Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2687–2696 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2687 Pre- and In-Parsing Models for Neural Empty Category Detection Yufei Chen, Yuanyuan Zhao, Weiwei Sun and Xiaojun Wan Institute of Computer Science and Technology, Peking University The MOE Key Laboratory of Computational Linguistics, Peking University [email protected], [email protected], {ws,wanxiaojun}@pku.edu.cn Abstract Motivated by the positive impact of empty categories on syntactic parsing, we study neural models for pre- and in-parsing detection of empty categories, which has not previously been investigated. We find several non-obvious facts: (a) BiLSTM can capture non-local contextual information which is essential for detecting empty categories, (b) even with a BiLSTM, syntactic information is still able to enhance the detection, and (c) automatic detection of empty categories improves parsing quality for overt words. Our neural ECD models outperform the prior state-of-the-art by significant margins. 1 Introduction Encoding unpronounced nominal elements, such as dropped pronouns and traces of dislocated elements, the empty category is an important piece of machinery in representing the (deep) syntactic structure of a sentence (Carnie, 2012). Figure 1 shows an example. In linguistic theory, e.g. Government and Binding (GB; Chomsky, 1981), empty category is a key concept bridging S-Structure and D-Structure, due to its possible contribution to trace movements. In practical treebanking, empty categories have been used to indicate long-distance dependencies, discontinuous constituents, and certain dropped elements (Marcus et al., 1993; Xue et al., 2005). Recently, there has been an increasing interest in automatic empty category detection (ECD; Johnson, 2002; Seeker et al., 2012; Xue and Yang, 2013; Wang et al., 2015). And it has been shown that ECD is able to improve the linear model-based dependency parsing (Zhang et al., 2017b). There are two key dimensions of approaches Pre-Parsing In-Parsing Post-Parsing Linear ✔ ✔ ✔ Neural ✘ ✘ ✔ Table 1: ECD approaches that have been investigated. for ECD: the relationship with parsing and statistical disambiguation. Considering the relationship with parsing, we can divide ECD models into three types: (1) Pre-parsing approach (e.g. Dienes and Dubey (2003)) where empty categories are identified without using syntactic analysis, (2) In-parsing approach (e.g. Cai et al. (2011); Zhang et al. (2017b)) where detection is integrated into a parsing model, and (3) Post-parsing approach (e.g. Johnson (2002); Wang et al. (2015)) where parser outputs are utilized as clues to determine the existence of empty categories. For disambiguation, while early work on dependency parsing focused on linear models, recent work started exploring deep learning techniques for the post-parsing approach (Wang et al., 2015). From the above two dimensions, we show all existing systems for ECD in Table 1. Neural models for pre- and in-parsing ECD have not been studied yet. In this paper, we fill this gap in the literature. It is obvious that empty categories are highly related to surface syntactic analysis. To determine the existence of empty elements between two overt words relies on not only the sequential contexts but also the hierarchical contexts. Traditional linear structured prediction models, e.g. 
conditional random fields (CRF), for sequence structures are rather weak to capture hierarchical contextual information which is essentially non-local for their architectures. Accordingly, pre-parsing models based on linear disambiguation techniques fail to produce comparable accuracy to the other two models. In striking contrast, RNN based se2688 上海 浦东 最近 颁布了∅1 ∅2 涉及 经济 领域 的 七十一件 法规性 文件 Shanghai Pudong recently issue AS involve economic field DE 71 M regulatory document root Figure 1: An example from CTB: Shanghai Pudong recently enacted 71 regulatory documents involving the economic fields. The dependency structure is according to Xue (2007). “∅1” indicates a null operator that represents empty relative pronouns. “∅2” indicates a trace left by relativization. quence labeling models have been shown very powerful to capture non-local information, and therefore have great potential to advance the preparsing approach for ECD. In this paper, we propose a new bidirectional LSTM (BiLSTM) model for pre-parsing ECD using information about contextual words. Previous studies highlight the usefulness of syntactic analysis for ECD. Furthermore, syntactic parsing of overt words can benefit from detection of empty elements and vice versa (Zhang et al., 2017b). In this paper, we follow Zhang et al.’s encouraging results obtained with linear models and study first- and second-order neural models for inparsing ECD. The main challenge for neural inparsing ECD is to encode empty element candidates and integrate the corresponding embeddings into a parsing model. We focus on the state-ofthe-art parsing architecture developed by Kiperwasser and Goldberg (2016) and Dozat and Manning (2016), which use BiLSTMs to extract features from contexts followed by a nonlinear transformation to perform local scoring. To evaluate the effectiveness of deep learning techniques for ECD, we conduct experiments on a pro-drop language, i.e. Chinese. The empirical evaluation indicates some non-obvious facts: 1. Neural ECD models outperform the prior state-of-the-art by significant margins. Even a pre-parsing model without any syntactic information outperforms the best existing linear in-parsing and post-parsing ECD models. 2. Incorporating empty elements can help neural dependency parsing. This parallels Zhang et al.’s investigation on linear models. 3. Our in-parsing neural models obtain better predictions than the pre-parsing model. The implementation of all models is available at https://github.com/draplater/ empty-parser. 2 Pre-Parsing Detection 2.1 Context of Empty Categories Sequential Context Perhaps, it is the most intuitive idea to view a natural language sentence as a word-by-word sequence. Analyzing contextual information by modeling neighboring words according to this sequential structure is a very basic view for dealing with a large number of NLP tasks, e.g. POS tagging and syntactic parsing. It is also important to consider sequential contexts for ECD to derive the horizontal features that exploit the lexical context of the current pending point, presented as one or more preceding and following word tokens, as well as their part-of-speech tags (POS). Hierarchical Context The detection of ECs requires broad contextual knowledge. Besides onedimensional representation, vertical features are equally essential to express the empty element. The hierarchical structure is a compact reflection of the syntactic content. 
By integrating the hierarchical context, we can analyze the regular distributional pattern of ECs in a syntactic tree. More specifically, it means considering the head information of the EC and relevant dependencies to augment the prediction. Both sequential and hierarchical contexts are essential to determine the existence of empty elements between two overt words. Even words close to each other in a hierarchical structure may appear far apart in sequential representations, which makes it hard for linear sequential tagging models to catch the hierarchical contextual information. RNN based sequence models have been proven very powerful to capture non-local features. In this paper, we show that LSTM is able to advance the pre-parsing ECD significantly. 2689 Interspace: @@ 颁布(issue) @@ 了(AS) @@ 涉及(involve) @@ 经济(economic) O VV O AS *OP**T* VV O NN Pre2 and Pre3: 颁布(issue) 了(AS) 涉及(involve) 经济(economic) VV AS VV#pre1=*T*#pre2=*OP* NN Prepost: 颁布(issue) 了(AS) 涉及(involve) 经济(economic) VV AS#post=*OP* VV#pre1=*T* NN Figure 2: An example of four kinds of annotations. The phrase is cut out from the sentence in Figure 1. ”@@” means interspaces between words. 2.2 A Sequence-Oriented Model In the sequence-oriented model, we formulate ECD as a sequence labeling problem. In general, we attach ECs to surrounding overt tokens to represent their identifications, i.e. their locations and types. We explore four sets of annotation specifications, denoted as Interspace, Pre2, Pre3 and Prepost, respectively. Following is the detailed descriptions. Interspace We convert ECs’ information into different tags of the interspaces between words. The assigned tag is the concatenation of ECs between the two words. If there is no EC, we just tag the interspace as O. Specially, according to our observation that only one EC occurs at the end of the sentence in our data set, we simply count on the heading space of sentences instead of the one standing at the end. Assume that there are n words in a given sentence, then there will be 2 ∗n items (n words and n interspaces) to tag. Pre2 and Pre3 We stick ECs to words following them. In experiments using POS information, ECs are attached to the POS of the next word, while the normal words are just tagged with their POS. In experiments without POS information, ECs are straightly regarded as the label of the following words. Words without ECs ahead are consistently tagged using an empty marker. Similar to Interspace, linearly consecutive ECs are concatenated as a whole. Pre2 means that at most two preceding consecutive ECs are considered while Pre3 limits the considered continuous length to three. The determination of window lengths are grounded in the distribution of ECs’ continuous lengths as shown in Table 2. Prepost Considering that it may be a challenge to capture long-distance features, we introduce another labeling rule called Prepost. Different from Pre2 and Pre3, the responsibility for presenting ECs will be shared by both the preceding and the 1 2 3 4 Train 7499 3702 142 5 Dev 530 233 10 0 Test 900 433 19 0 Table 2: The distribution of ECs’ continuous lengths in training, development and test data. following words. Whereas, tags heading sentences will remain unchanged. Particularly, if the amount of consecutive ECs in the current position is an odd number, we choose to attach the extra EC to the following word for consistency and clarity. Take part of the sentence in Figure 1 as an example. As described above, the four kinds of representations are depicted in Figure 2. 
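To make the four schemes concrete, the following Python sketch (our own illustration, not the released implementation; the function name and input format are assumptions) derives the Interspace encoding of the Figure 2 fragment, interleaving one tag per interspace with one tag per overt word:

```python
def interspace_encode(pos_tags, ecs_before):
    """Encode a sentence in the Interspace scheme: the tag sequence
    interleaves one tag per word-preceding interspace ("O", or the
    concatenation of the ECs occupying it) with one tag per overt word
    (here its POS tag), giving 2 * n tags for n words.

    pos_tags:   POS tags of the overt tokens, e.g. ["VV", "AS", "VV", "NN"]
    ecs_before: EC labels preceding each word, e.g. [[], [], ["*OP*", "*T*"], []]
    """
    tags = []
    for pos, ecs in zip(pos_tags, ecs_before):
        tags.append("".join(ecs) if ecs else "O")  # interspace tag
        tags.append(pos)                           # overt word tag
    return tags


# the 颁布/了/涉及/经济 fragment of Figure 2
print(interspace_encode(["VV", "AS", "VV", "NN"],
                        [[], [], ["*OP*", "*T*"], []]))
# ['O', 'VV', 'O', 'AS', '*OP**T*', 'VV', 'O', 'NN']
```

The Pre2, Pre3 and Prepost variants differ only in which neighbouring word tag the concatenated EC labels are attached to.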
To investigate the effect of POS in the tagging process, we also conduct experiments that integrate POS tags into the tagging process. For Interspace, POS tags are individual output labels, while for the other representations, the POS information is used to divide an empty-category-integrated tag into subtypes.

2.3 Tagging Based on LSTM-CRF

In order to capture long-range syntactic information for accurate disambiguation in the pre-parsing phase, we build an LSTM-CRF model inspired by the neural network proposed in Ma and Hovy (2016). A BiLSTM layer is set up on character embeddings to extract character-level representations of each word, which are concatenated with the pre-trained word embedding before being fed into another BiLSTM layer to capture contextual information. Thus we obtain dense and continuous representations of the words in a given sentence. The last step is to decode with a linear-chain CRF, which can optimize the output sequence by factoring in local characteristics. Dropout layers both before and after the sentence-level network serve to prevent over-fitting.

3 In-Parsing Detection

Zhang et al. (2017b) design novel algorithms to produce dependency trees in which empty elements are allowed. Their results show that integrating empty categories can augment the parsing of overt tokens when a structured perceptron, a global linear model, is applied for disambiguation. From a different perspective, by jointly modeling ECD and dependency parsing, we can utilize full syntactic information in the process of detecting ECs. Parallel to their work, we explore the effect of ECD on neural dependency-based parsing in this section.

3.1 Joint ECD and Dependency Parsing

To perform ECD and dependency parsing in a unified framework, we formulate the task as an optimization problem. Assume that we are given a sentence s with n normal words. We use an index set Io = {(i, j) | i, j ∈ {1, ..., n}} to denote all possible overt dependency edges, and Ic = {(i, φj) | i, j ∈ {1, ..., n}} to denote all possible covert dependency edges. φj denotes an empty node that precedes the jth word. Then a dependency parse with empty nodes can be represented as a vector z = {z(i, j) : (i, j) ∈ Io ∪ Ic}. Let Z denote the set of all possible z, and let PART(z) denote the factors in the dependency tree, including edges (and edge siblings in the second-order model). Then parsing with ECD can be defined as a search for the highest-scored z*(s) among all compatible analyses, just like parsing without empty elements:

z*(s) = argmax_{z ∈ Z(s)} SCORE(s, z) = argmax_{z ∈ Z(s)} Σ_{p ∈ PART(z)} SCOREPART(s, p)

The graph-based parsing algorithms proposed by Zhang et al. are based on two properties: ECs can only serve as dependents, and the number of successive ECs is limited. The latter trait makes it reasonable to treat consecutive ECs governed by the same head as one word. We also follow this set-up.

3.2 Scoring Based on BiLSTM

[Figure 3: The neural network structure when parsing the sentence "It wasn't Black Monday." Five MLPs are used for overt edges (i, j), covert edges (i, φj), overt-both siblings (i, j, k), covert-inside siblings (i, φj, k) and covert-outside siblings (i, j, φk), respectively; three of them are shown in the figure.]

Kiperwasser and Goldberg (2016) proposed a simple yet effective architecture to implement neural
dependency parsers. In particular, a BiLSTM is utilized as a powerful feature extractor to assist a dependency parser. Mainstream data-driven dependency parsers, including both transition- and graph-based ones, can apply useful word vectors provided by a BiLSTM to conduct the disambiguation. Following Kiperwasser and Goldberg (2016)’s experience on graph-based dependency parser, we implement such a parser to recover empty categories and to evaluate the impact of empty categories on surface parsing. Here we present details of the design of our parser. A vector is associated with each word or POS-tag to transform them into continuous and dense representations. We use pre-trained word embeddings and random initialized POS-tag embeddings. The concatenation of the word embedding and the POS-tag embedding of each word in a specific sentence is used as the input of BiLSTMs to extract context related feature vectors ri. r1:n = BiLSTM(s; 1 : n) The context related feature vectors are fed into a non-linear transformation to perform scoring. 3.3 A First-Order Model In the first-order model, we only consider the head and the dependent of the possible dependency arc. The two feature vectors of each word pair is scored with a non-linear transformation g as the firstorder score. When words i and j are overt words, 2691 we define the score function in sentence s as follows, SCOREDEP(s, i, j) = W2 · g(W1,1 · ri + W1,2 · rj + b) W2, W1,1 and W1,2 denote the weight matrices in linear transformations. The score of covert edge from word i to word φj is calculated in a similar way with different parameters: SCOREEMPTY(s, i, φj) = W ′ 2 · g(W ′ 1,1 · ri + W ′ 1,2 · rj + b′) These non-linear transformations are also known as Multiple Layer Perceptrons(MLPs). The total score in our first-order model is defined as follows, SCORE(s, z) = X (i,j)∈DEP(z) SCOREDEP(s, i, j) + X (i,φj)∈DEPEMPTY(z) SCOREEMPTY(s, i, φj) DEP(z) and DEPEMPTY(z) denote all overt and covert edges in z respectively. Because each overt and covert edge is selected independently of the others, the decoding process can be seen as calculating the maximum subtree from overt edges(we use Eisner Algorithm in our experiments) and appending each covert edge (i, φj) when SCOREEMPTY(i, φj) > 0. 3.4 A Second-Order Model In the second-order model, we also consider sibling arcs. We extend the neural network in section 3.3 to perform the second-order parsing. We calculate second-order scores(scores defined over sibling arcs) in a similar way. Each pair of overt sibling arcs, for example, (i, j) and (i, k) (j < k), is denoted as (i, j, k) and scored with a non-linear transformation. SCOREOVERTBOTH(s, i, j, k) = W ′′ 2 · g(W ′′ 1,1 · ri + W ′′ 1,2 · rj + W ′′ 1,3 · rk + b′′) Zhang et al. (2017b) defines two kinds of second-order scores to describe the interaction between concrete nodes and empty categories: the covert-inside sibling (i, φj, k) and covert-outside sibling (i, j, φk). Their scores can be calculated in a similar way with different parameters. And finally, the score function over the whole syntactic analysis is defined as: SCORE(s, z) = X (i,j)∈DEP(z) SCOREDEP(s, i, j) + X (i,φj)∈DEPEMPTY(z) SCOREEMPTY(s, i, φj) + X (i,j,k)∈OVERTBOTH(z) SCOREOVERTBOTH(s, i, j, k) + X (i,φj,k)∈COVERTIN(z) SCORECOVERTIN(s, i, φj, k) + X (i,j,φk)∈COVERTOUT(z) SCORECOVERTOUT(s, i, j, φk) OVERTBOTH(z), COVERTIN(z) and COVERTOUT(z) denotes overt-both, covertinside and covert-outside siblings of z respectively. Totally 5 MLPs are used to calculate the 5 types of scores. 
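The following NumPy sketch illustrates one of these MLP scorers, e.g. the overt-edge scorer SCOREDEP; the class name, dimensions and initialization are illustrative assumptions rather than the settings used in our experiments:

```python
import numpy as np

class EdgeScorer:
    """Minimal sketch of one MLP scorer, e.g.
    SCOREDEP(s, i, j) = W2 . g(W1,1 r_i + W1,2 r_j + b), with g = tanh.
    The model keeps separate parameter sets for overt edges, covert edges
    and each sibling type (five MLPs in total); this class is one of them.
    """
    def __init__(self, feat_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W1_head = rng.normal(scale=0.1, size=(hidden_dim, feat_dim))
        self.W1_dep = rng.normal(scale=0.1, size=(hidden_dim, feat_dim))
        self.b = np.zeros(hidden_dim)
        self.W2 = rng.normal(scale=0.1, size=hidden_dim)

    def score(self, r_head, r_dep):
        # r_head, r_dep: BiLSTM feature vectors of the head and the dependent
        hidden = np.tanh(self.W1_head @ r_head + self.W1_dep @ r_dep + self.b)
        return float(self.W2 @ hidden)


# toy usage: pretend r holds the BiLSTM outputs r_1..r_n of a 5-word sentence
r = np.random.default_rng(1).normal(size=(5, 8))
overt_scorer = EdgeScorer(feat_dim=8, hidden_dim=16)
print(overt_scorer.score(r[0], r[3]))  # score of the candidate edge 0 -> 3
```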
The network structure is shown in Figure 3. Labeled Parsing Similar to Kiperwasser and Goldberg (2016) and Zhang et al. (2017a), we use a two-step process to perform labeled parsing: conduct an unlabeled parsing and assign labels to each dependency edge. The labels are determined with the nonlinear classification. We use different nonlinear classifiers for edges between concrete nodes and empty categories. Training In order to update graphs which have high model scores but are very wrong, we use a margin-based approach to compute loss from the gold tree T ∗and the best prediction ˆT under the current model. We define the loss term as: max(0, ∆(T ∗, ˆT) −SCORE(T ∗) + SCORE( ˆT)) The margin objective ∆measures the similarity between the gold tree T ∗and the prediction ˆT. Following Kiperwasser and Goldberg (2016)’s experience of loss augmented inference, we define ∆as the count of dependency edges in prediction results but not belonging to the gold tree. 3.5 Structure Regularization ECD significantly increases the search space for parsing. This results in a side effect for practical parsing. Given the limit of available annotations for training, searching for more complex structures in a larger space is harmful to the generalization ability in structured prediction (Sun, 2692 2014). To control structure-based overfitting, we train a normal dependency parser, namely parser for overt words only, and use its first- and secondorder scores to augment the corresponding score functions in the joint parsing and ECD model. At the training phase, the two parsers are trained separately, while at the test phase, the scores are calculated by individual models and added for decoding. 4 Experiments 4.1 Experimental Setup 4.1.1 Data We conduct experiments on a subset of Penn Chinese Treebank (CTB; Xue et al., 2005) 9.0. As a pro-drop language, the empty category is a very useful method for representing the (deep) syntactic analysis in Chinese language. Empty categories in CTB is divided into six classes: pro, PRO, OP, T, RNR and *, which were described in detail in Xue and Yang (2013); Wang et al. (2015). For comparability with the state-of-the-art, the division of training, development and testing data is coincident with the previous work (Xue and Yang, 2013). Our experiments can be divided into two groups. The first group is conducted on the linear conditional random field (Linear-CRF) model and LSTM-CRF tagging model to evaluate gains from the introduction of neural structures. The second group is designed for the dependency-based inparsing models. 4.1.2 Evaluation Metrics We adopt two kinds of metrics for the evaluation of our experiments. The first one focuses on EC’s position and type, in accordance with the labeled empty elements measure proposed by Cai et al. (2011), which can be implemented on all models in our experiments. The second one is stricter. Besides position and type, it also checks EC’s head information. An EC is considered to be correct, only when all the three parts are the same as the corresponding gold standard. Thus only models involved in dependency structures can be evaluated according to the latter metric. Based on above measures of the two degrees, we evaluate our neural pre- and in-parsing models regarding each type of EC as well as overall performance. Besides, to compare different models’ abilities to capture non-local information, we design Dependency Distance to indicate the number of words from one EC to its head, not counting other ECs on the path. 
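A minimal Python sketch of this metric is given below; the flat tokenization of Figure 1 and the assumption that ∅1 attaches to 的 (consistent with the distances reported for this example in the next sentence) are our own illustrative choices:

```python
def dependency_distance(ec_index, head_index, ec_indices):
    """Dependency Distance: the number of overt words strictly between an
    empty category and its head, not counting other empty categories on the
    path. Indices refer to a single token sequence in which ECs are
    interleaved with the overt words.
    """
    lo, hi = sorted((ec_index, head_index))
    return sum(1 for k in range(lo + 1, hi) if k not in ec_indices)


# Figure 1, assumed flat tokenization:
# 上海(0) 浦东(1) 最近(2) 颁布(3) 了(4) ∅1(5) ∅2(6) 涉及(7) 经济(8) 领域(9) 的(10) ...
ecs = {5, 6}
print(dependency_distance(6, 7, ecs))   # ∅2 attached to 涉及 -> distance 0
print(dependency_distance(5, 10, ecs))  # ∅1 attached to 的   -> distance 3
```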
Taking the two ECs in Figure 1 as an example, ∅2 has a Dependency Distance of 0 while ∅1 ’s Dependency Distance is 3. We calculate labeled recall scores for enumerated Dependency Distance. A higher score means greater capability to catch and to represent long-distance details. 4.2 Results of Pre-Parsing Models Table 3 shows overall performances of the two sequential models on development data. From the results, we can clearly see that the introduction of neural structure pushes up the scores exceptionally. The reason is that our LSTM-CRF model not only benefits from the linear weighted combination of local characteristics like ordinary CRF models, but also has the ability to integrate more contextual information, especially long-distance information. It confirms LSTM-based models’ great superiority in sequence labeling problems. Further more, we find that the difference among the four kinds of representations is not so obvious. The most performing one with LSTM-CRF model is Interspace, but the advantage is narrow. Pre3 uses a larger window length to incorporate richer contextual tokens, but at the same time, the searching space for decoding grows larger. It explains that the performance drops slightly with increasing window length. In general, experiments with POS tags show higher scores as more syntactic clues are incorporated. We compare LSTM-CRF with other state-ofthe-art systems in Table 41. We can see that a simple neural pre-parsing model outperforms state-ofthe-art linear in-parsing systems. Analysis about results on different EC types as displayed in Table 5 shows that the sequence-oriented pre-parsing model is good at detecting pro compared with previous systems, which is used widely in pro-drop languages. Additionally, the model succeeds in detecting seven * EC tokens in evaluating process. * indicates trace left by passivization as well as raising, and is very rare in training data. Previous models usually cannot identify any *. This detail reflects that the LSTM-CRF model can make the most of limited training data compared with existing systems. 1 Wang et al. (2015) reported an overall F-score of 71.7. But their result is based on the gold standard syntactic analysis. 2693 Linear CRF LSTM-CRF Without POS With POS Without POS With POS P R F1 P R F1 P R F1 P R F1 Interspace 74.6 20.6 32.2 71.2 30.3 42.5 67.9 59.8 63.6 73.0 61.6 66.8 Pre2 72.4 30.1 42.5 72.8 32.4 44.8 71.1 58.3 64.1 74.8 57.4 65.0 Pre3 73.1 30.2 42.8 73.0 32.5 44.9 71.1 58.5 64.2 73.8 57.0 64.3 Prepost 70.9 32.9 45.0 74.4 30.3 43.1 71.0 57.6 63.6 72.9 58.6 65.0 Table 3: The overall performance of the two sequential models on development data. P R F1 Pre-parsing 67.3 54.7 60.4 In-parsing 72.6 55.5 62.9 In-parsing* 70.9 54.1 61.4 (Xue and Yang, 2013)* 65.3 51.2 57.4 (Cai et al., 2011) 66.0 54.5 58.6 Table 4: The overall performance on test data. ”*” indicates more stringent evaluation metrics. EC Type Total Correct P R F1 pro 315 85 52.5 27.0 35.6 PRO 300 183 58.8 61.0 59.9 OP 575 338 73.0 58.8 65.1 T 580 355 73.3 61.2 66.7 RNR 34 30 62.5 88.2 73.2 * 19 7 46.7 36.8 41.2 Overall 1823 998 67.3 54.7 60.4 Table 5: Occurrences of different ECs in test data and detailed results of Interspace with POS information. 4.3 Results of In-Parsing Models Table 6 presents detailed results of the in-parsing models on test data. Compared with the stateof-the-art, the first-order model performs a little worse while the second-order model achieves a remarkable score. 
The first-order parsing model only constrains the dependencies of both the covert and overt tokens to make up a tree. Due to the loose scoring constraint of the first-order model, the prediction of empty nodes is affected little from the prediction of dependencies of overt words. The four bold numbers in the table intuitively elicits the conclusion that integrating an empty edge and its sibling overt edges is necessary to boost the performance. It makes sense because empty categories are highly related to syntactic analysis. When we conduct ECD and dependency parsing simultaneously, we can leverage First-order Second-order Type P R F1 P R F1 pro 52.5 16.8 25.5 54.4 19.7 28.9 PRO 59.7 47.3 52.8 60.6 58.0 59.3 OP 74.5 55.8 63.8 79.6 67.8 73.2 T 70.6 51.7 59.7 77.3 62.8 69.3 RNR 70.8 50.0 58.6 77.8 61.8 68.9 * 0.0 0.0 0.0 0.0 0.0 0.0 Overall 68.2 45.7 54.7 72.6 55.5 62.9 Evaluation with Head pro 50.5 16.2 24.5 52.6 19.1 28.0 PRO 58.4 46.3 51.7 57.8 55.3 56.6 OP 72.2 54.1 61.8 78.6 67.0 72.3 T 68.5 50.2 57.9 75.4 61.2 67.6 RNR 70.8 50.0 58.6 77.8 61.8 68.9 * 0.0 0.0 0.0 0.0 0.0 0.0 Overall 66.3 44.4 53.2 70.9 54.1 61.4 Table 6: The performances of the first- and second-order in-parsing models on test data. more hierarchical contextual information. Comparing results regarding EC types, we can find that OP and T benefit most from the parsing information, the F1 score increasing by about ten points, more markedly than other types. 4.4 Results on Dependency Parsing Table 7 shows the impact of automatic detection of empty categories on parsing overt words. We compare the results of both steps in labeled parsing. We can clearly see that integrating empty elements into dependency parsing can improve the neural parsing accuracy of overt words. Besides, when jointing parsing models both without and with ECs together, we can push up the performance further. These results confirm the conclusion in Zhang et al. (2017b) that empty elements help parse the overt words. The main reason lies in that the existence of ECs provides extra structural information which can reduce approximation 2694 -EC +EC -+EC Unlabeled 87.6 88.9 89.6 Labeled 84.6 85.9 86.6 Table 7: Accuracies of both unlabeled and labeled parsing on development data. -EC indicates parsing without empty categories. +EC indicates the second-order in-parsing models. -+EC indicates jointing parsing models both without and with ECs together. errors in a structured prediction problem. According to above analysis, we can draw a conclusion that ECD and syntactic parsing can promote each other mutually. That partially explains why in-parsing models can outperform preparsing models. Meanwhile, it provides a new approach to improving the dependency parsing quality in a unified framework. 4.5 Impact of Dependency Distance 0 10 20 30 40 50 60 70 80 0 1 2 3 4 5-9 10+ Recall Dependency Distance Pre-parsing In-parsing Figure 4: Recall scores of different models regarding Dependency Distance. ”Pre-parsing” and ”Inparsing” refer to the LSTM-CRF model and the dependency-based in-parsing model respectively. We compare pre- and in-parsing models regarding Dependency Distance. The former refers to the LSTM-CRF model while the latter means the dependency-based in-parsing model. Figure 4 shows the results. The abscissa value ranges from 0 to 26, with the longest dependency arc spanning 26 non-EC word tokens. We can see that longdistance disambiguation is a challenge shared by both models. 
When the value of Dependency Distance exceeds four, the recall score drops gradually with abscissa increasing. Based on the comparison of two sets of data, we can find that inparsing model performs better on ECs which are close to their heads. However, as for ECs which are far apart from their heads, two models have performed almost exactly alike. It demonstrates that LSTM structure is capable of capturing nonlocal features, making up for no exposure to parsing information. 4.6 Challenges On the whole, the most challenging EC type is pro. We assume that it is because that pro-drop situations are complicated and diverse in Chinese language. According to Chinese linguistic theory, pronouns are dropped as a result of continuing from the preceding discourse or just idiomatic rules, such as the ellipsis of the first person pronoun “我/I” in the subject position. To fill this gap, we may need to extract more deep structural features. Another difficulty is the detection of consecutive ECs. In the result of our experiments, inparsing dependency-based model can only accurately detect up to two consecutive ECs. Too many empty elements in the same sentence conceal too much syntactic information, making it hard to disclose the original structure. Moreover, in view of the fact that ECs play an essential role in syntactic analysis, the current detection accuracy of ECs is far from enough. We still have a long way to go. 5 Related Work The detection of empty categories is an essential ground for many downstream tasks. For example, Chung and Gildea (2010) has proved that automatic empty category detection has a positive impact on machine translation. Zhang et al. (2017b) shows that ECD can benefit linear syntactic parsing of overt words. To accurately distinguish empty elements in sentences, there are generally three approaches. The first method is to build pre-processors before syntactic parsing. Dienes and Dubey (2003) proposed a shallow trace tagger which can detect discontinuities. And it can be combined with unlexicalized PCFG parsers to implement deep syntactic processing. Due to the lack of phrase structure information, it did not acquire remarkable results. The second method is to integrate ECD into parsing, as shown in Schmid (2006) and Cai et al. (2011), which involved empty elements in the process of generating parse trees. Another in-parsing system is pro2695 posed in Zhang et al. (2017b). Zhang et al. (2017b) designed algorithms to produce dependency trees in which empty elements are allowed. To add empty elements into dependency structures, they extend Eisner’s first-order DP algorithm for parsing to second- and third-order algorithms. The last approach to recognizing empty elements is post-parsing methods. Johnson (2002) proposed a simple pattern-matching algorithm for recovering empty nodes in phrase structure trees while Campbell (2004) presented a rule-based algorithm. Xue and Yang (2013) conducted ECD based on dependency trees. Their methods can leverage richer syntactic information, thus have achieved more satisfying scores. As neural networks have been demonstrated to have a great ability to capture complex features, it has been applied in multiple NLP tasks (Bengio and Schwenk, 2006; Collobert et al., 2011). Neural methods have also explored in distinguishing empty elements. For example, Wang et al. (2015) described a novel ECD solution using distributed word representations and achieved the state-ofthe-art performance. 
Based on above work, we explore neural pre- and in-parsing models for ECD. 6 Conclusion Neural networks have played a big role in multiple NLP tasks recently owing to its nonlinear mapping ability and the avoidance of human-engineered features. It should be a well-justified solution to identify empty categories as well as to integrate empty categories into syntactic analysis. In this paper, we study neural models to detect empty categories. We observe three facts: (1) BiLSTM significantly advances the pre-parsing ECD. (2) Automatic ECD improves the neural dependency parsing quality for overt words. (3) Even with a BiLSTM, syntactic information can enhance the detection further. Experiments on Chinese language show that our neural model for ECD exceptionally boosts the state-of-the-art detection accuracy. Acknowledgement This work was supported by the National Natural Science Foundation of China (61772036, 61331011) and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. Weiwei Sun is the corresponding author. References Yoshua Bengio and Holger Schwenk. 2006. Neural probabilistic language models. In Innovations in Machine Learning. Springer, page 137186. Shu Cai, David Chiang, and Yoav Goldberg. 2011. Language-independent parsing with empty elements. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Portland, Oregon, USA, pages 212–216. http://www.aclweb.org/anthology/P11-2037. Richard Campbell. 2004. Using linguistic principles to recover empty categories. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume. Barcelona, Spain, pages 645–652. https://doi.org/10.3115/1218955.1219037. A. Carnie. 2012. Syntax: A Generative Introduction 3rd Edition and The Syntax Workbook Set. Introducing Linguistics. Wiley. https://books.google.com/books?id=jhGKMAEACAAJ. Noam Chomsky. 1981. Lectures on Government and Binding. Foris Publications, Dordecht. Tagyoung Chung and Daniel Gildea. 2010. Effects of empty categories on machine translation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Cambridge, MA, pages 636–645. http://www.aclweb.org/anthology/D10-1062. Ronan Collobert, Jason Weston, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(1):2493–2537. P´etr Dienes and Amit Dubey. 2003. Deep syntactic processing by combining shallow methods. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Sapporo, Japan, pages 431–438. https://doi.org/10.3115/1075096.1075151. Timothy Dozat and Christopher D. Manning. 2016. Deep biaffine attention for neural dependency parsing. CoRR abs/1611.01734. http://arxiv.org/abs/1611.01734. Mark Johnson. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2696 Philadelphia, Pennsylvania, USA, pages 136–143. https://doi.org/10.3115/1073083.1073107. Eliyahu Kiperwasser and Yoav Goldberg. 2016. 
Simple and accurate dependency parsing using bidirectional lstm feature representations. Transactions of the Association for Computational Linguistics 4:313–327. https://transacl.org/ojs/index.php/tacl/article/view/885. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1064–1074. http://www.aclweb.org/anthology/P16-1101. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: the penn treebank. Computational Linguistics 19(2):313–330. http://dl.acm.org/citation.cfm?id=972470.972475. Helmut Schmid. 2006. Trace prediction and recovery with unlexicalized pcfgs and slash features. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Sydney, Australia, pages 177–184. https://doi.org/10.3115/1220175.1220198. Wolfgang Seeker, Rich´ard Farkas, Bernd Bohnet, Helmut Schmid, and Jonas Kuhn. 2012. Data-driven dependency parsing with empty heads. In Proceedings of COLING 2012: Posters. The COLING 2012 Organizing Committee, Mumbai, India, pages 1081– 1090. http://www.aclweb.org/anthology/C12-2105. Xu Sun. 2014. Structure regularization for structured prediction. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, Curran Associates, Inc., pages 2402– 2410. http://papers.nips.cc/paper/5563-structureregularization-for-structured-prediction.pdf. Xun Wang, Katsuhito Sudoh, and Masaaki Nagata. 2015. Empty category detection with joint contextlabel embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 263– 271. http://www.aclweb.org/anthology/N15-1030. Naiwen Xue, Fei Xia, Fu-dong Chiou, and Marta Palmer. 2005. The penn Chinese treebank: Phrase structure annotation of a large corpus. Natural Language Engineering 11:207–238. https://doi.org/10.1017/S135132490400364X. Nianwen Xue. 2007. Tapping the implicit information for the PS to DS conversion of the Chinese treebank. In Proceedings of the Sixth International Workshop on Treebanks and Linguistics Theories. Nianwen Xue and Yaqin Yang. 2013. Dependencybased empty category detection via phrase structure trees. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Atlanta, Georgia, pages 1051– 1060. http://www.aclweb.org/anthology/N13-1125. Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2017a. Dependency parsing as head selection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics, Valencia, Spain, pages 665–676. http://www.aclweb.org/anthology/E171063. Xun Zhang, Weiwei Sun, and Xiaojun Wan. 2017b. The covert helps parse the overt. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017). Association for Computational Linguistics, Vancouver, Canada, pages 343–353. 
http://aclweb.org/anthology/K17-1035.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2697–2705 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2697 Composing Finite State Transducers on GPUs Arturo Argueta and David Chiang Department of Computer Science and Engineering University of Notre Dame {aargueta,dchiang}@nd.edu Abstract Weighted finite state transducers (FSTs) are frequently used in language processing to handle tasks such as part-of-speech tagging and speech recognition. There has been previous work using multiple CPU cores to accelerate finite state algorithms, but limited attention has been given to parallel graphics processing unit (GPU) implementations. In this paper, we introduce the first (to our knowledge) GPU implementation of the FST composition operation, and we also discuss the optimizations used to achieve the best performance on this architecture. We show that our approach obtains speedups of up to 6× over our serial implementation and 4.5× over OpenFST. 1 Introduction Finite-state transducers (FSTs) and their algorithms (Mohri, 2009) are widely used in speech and language processing for problems such as grapheme-to-phoneme conversion, morphological analysis, part-of-speech tagging, chunking, named entity recognition, and others (Mohri et al., 2002; Mohri, 1997). Hidden Markov models (Baum et al., 1970), conditional random fields (Lafferty et al., 2001) and connectionist temporal classification (Graves et al., 2006) can also be thought of as finite-state transducers. Composition is one of the most important operations on FSTs, because it allows complex FSTs to be built up from many simpler building blocks, but it is also one of the most expensive. Much work has been done on speeding up composition on a single CPU processor (Pereira and Riley, 1997; Hori and Nakamura, 2005; Dixon et al., 2007; Allauzen and Mohri, 2008; Allauzen et al., 2009; Ladner and Fischer, 1980; Cheng et al., 2007). Methods such as on-the-fly composition, shared data structures, and composition filters have been used to improve time and space efficiency. There has also been some successful work on speeding up composition using multiple CPU cores (Jurish and W¨urzner, 2013; Mytkowicz et al., 2014; Jung et al., 2017). This is a challenge because many of the algorithms used in NLP do not parallelize in a straightforward way and previous work using multi-core implementations do not handle the reduction of identical edges generated during the composition. The problem becomes more acute on the graphics processing units (GPUs) architecture, which have thousands of cores but limited memory available. Another problem with the composition algorithm is that techniques used on previous work (such as composition filters and methods to expand or gather transitions using dictionaries or hash tables) do not translate well to the GPU architecture given the hardware limitations and communication overheads. In this paper, we parallelize the FST composition task across multiple GPU cores. To our knowledge, this is the first successful attempt to do so. Our approach treats the composed FST as a sparse graph and uses some techniques from the work of Merrill et al. (2012); Jung et al. (2017) to explore the graph and generate the composed edges during the search. We obtain a speedup of 4.5× against OpenFST’s implementation and 6× against our own serial implementation. 
2 Finite State Transducers In this section, we introduce the notation that will be used throughout the paper for the composition task. A weighted FST is a tuple M = (Q, Σ, Γ, s, F, δ), where 2698 • Q is a finite set of states. • Σ is a finite input alphabet. • Γ is a finite output alphabet. • s ∈Q is the start state. • F ⊆Q are the accept states. • δ : Q × Σ × Γ × Q →R is the transition function. If δ(q, a, b, r) = p, we write q a:b/p −−−−→r. Note that we don’t currently allow epsilon transitions; this would require implementation of composition filters (Allauzen et al., 2009), which is not a trivial task on the GPU architecture given the data structures and memory needed. Hence, we leave this for future work. For the composition task, we are given two weighted FSTs: M1 = (Q1, Σ, Γ, s1, F1, δ1) M2 = (Q2, Γ, ∆, s2, F2, δ2). Call Γ, the alphabet shared between the two transducers, the inner alphabet, and let m = |Γ|. Call Σ and ∆, the input alphabet of M1 and the output alphabet of M2, the outer alphabets. The composition of M1 and M2 is the weighted FST M1 ◦M2 = (Q1 × Q2, Σ, ∆, s1s2, F1 × F2, δ) where δ(q1q2, a, b, r1r2) = X b∈Γ δ1(q1, a, c, r1) · δ2(q2, c, b, r2). That is, for each pair of transitions with the same inner symbol, q1 a:b/p1 −−−−−→r1 q2 b:c/p2 −−−−−→r2, the composed transducer has a transition q1q2 a:c/p1p2 −−−−−−→r1r2. Transitions with the same source, target, input, and output symbols are merged, adding their weights. 0 1 2 3 0 1 2 3 0,0 1,1 2,2 3,3 1,2 2,1 1,0 2,0 3,0 0,1 3,1 0,2 3,2 0,3 1,3 2,3 M2 M1 the:la/0.3 one:una/0.7 cat:gata/1.0 cat:gata/1.0 la:die/0.6 una:eine/0.4 gata:Katze/1.0 gata:Katze/1.0 the:die/0.18 one:eine/0.28 cat:Katze/1.00 cat:Katze/1.0 cat:Katze/1.0 cat:Katze/1.0 Figure 1: Example composition of two finite state transducers: M1 translates English to Spanish, M2 translates Spanish to German. The center of the image contains the composition of the two input transducers. This new transducer translates English to German. The dotted states and transitions are those that cannot be reached from the start state. 3 Method In this section, we describe our composition method and its implementation. 3.1 Motivation If implemented na¨ıvely, the above operation is inefficient. Even if M1 and M2 are trim (have no states that are unreachable from the start state or cannot reach the accept state), their composition may have many unreachable states. Figure 1 shows a clear example where the transducers used for composition are trim, yet several states (drawn as dotted circles) on the output transducers cannot be reached from the start state. The example also shows composed transitions that originate from unreachable states. As a result, a large amount of time and memory may be spent creating states and composing transitions that will not be reachable nor needed in practice. One solution to avoid the problem is to compose only the edges and states that are reachable from the start state on the output transducer to avoid unnecessary computations and reduce the overall memory footprint. We expect this problem to be more serious when the FSTs to be composed are sparse, that is, when there are many pairs of states without a transition between them. 
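As a minimal illustration of the composition rule defined in Section 2 and of the unreachable edges it can create, the following Python sketch (our own; the tuple-based transition format is an assumption) composes the two transducers of Figure 1 naively, producing the dotted, unreachable edges as well:

```python
from collections import defaultdict

def compose_naive(delta1, delta2):
    """Naive composition of two transition lists. Each transition is a tuple
    (source, input, output, target, weight). Whenever the inner symbols match
    (output of a delta1 transition == input of a delta2 transition), a
    composed transition is emitted; duplicates are merged by summing weights.
    This blindly generates every composable edge, including the ones drawn
    dotted in Figure 1 that are unreachable from the start state, which is
    exactly the waste that the reachability-driven algorithms below avoid.
    """
    composed = defaultdict(float)
    for q1, a, b, r1, p1 in delta1:
        for q2, b2, c, r2, p2 in delta2:
            if b == b2:
                composed[((q1, q2), a, c, (r1, r2))] += p1 * p2
    return composed


# the two transducers of Figure 1 (M1: English->Spanish, M2: Spanish->German)
delta1 = [(0, "the", "la", 1, 0.3), (0, "one", "una", 2, 0.7),
          (1, "cat", "gata", 3, 1.0), (2, "cat", "gata", 3, 1.0)]
delta2 = [(0, "la", "die", 1, 0.6), (0, "una", "eine", 2, 0.4),
          (1, "gata", "Katze", 3, 1.0), (2, "gata", "Katze", 3, 1.0)]
for (src, a, c, dst), p in sorted(compose_naive(delta1, delta2).items()):
    print(f"{src} --{a}:{c}/{p:.2f}--> {dst}")
```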
And we expect that FSTs used in 2699 Data Transitions Nonzero % Nonzero 1k de-en 21.7M 16.5k 0.076% en-de 12.3M 15.4k 0.125% en-es 12.5M 15.5k 0.124% en-it 13.1M 16.3k 0.124% 10k de-en 394M 114k 0.029% en-de 138M 93.9k 0.067% en-es 135M 93.3k 0.068% en-it 143M 97.2k 0.067% 15k de-en 634M 158k 0.025% en-de 201M 126k 0.062% en-es 195M 125k 0.064% en-it 209M 131k 0.062% Table 1: FSTs used in our experiments. Key: Data = language pair used to generate the transducers; Transitions = maximum possible number of transitions; Nonzero = number of transitions with nonzero weight; % Nonzero = percent of possible transitions with nonzero weight. The left column indicates the number of parallel sentences used to generate the transducers used for testing. natural language processing, whether they are constructed by hand or induced from data, will often be sparse. For example, below (Section 4.1), we will describe some FSTs induced from parallel text that we will use in our experiments. We measured the sparsity of these FSTs, shown in Table 1. These FSTs contain very few non-zero connections between their states, suggesting that the output of the composition will have a large number of unreachable states and transitions. The percentage of non-zero transitions found in the transducers used for testing decreases as the transducer gets larger. Therefore, when composing FSTs, we want to construct only reachable states, using a traversal scheme similar to breadth-first search to avoid the storage and computation of irrelevant elements. 3.2 Serial composition We first present a serial composition algorithm (Algorithm 2). This algorithm performs a breadthfirst search (BFS) of the composed FST beginning from the start state, so as to avoid creating inaccessible states. As is standard, the BFS uses two data structures, a frontier queue (A) and a visited set (Q), which is always a superset of A. For each state q1q2 popped from A, the algorithm composes Algorithm 1 Serial composition algorithm. Input Transducers: M1 = (Q1, Σ, Γ, s1, F1, δ1) M2 = (Q2, Γ, ∆, s2, F2, δ2) Output Transducer: M1 ◦M2 1: A ←{s1s2} ▷Queue of states to process 2: Q ←{s1s2} ▷Set of states created so far 3: δ ←∅ ▷Transition function 4: while |A| > 0 do 5: q1q2 ←pop(A) 6: for q1 a:b/p1 −−−−−→r1 ∈δ1 do 7: for q2 b:c/p2 −−−−−→r2 ∈δ2 do 8: δ(q1q2, a, c, r1r2) += p1p2 9: if r1r2 < Q then 10: Q ←Q ∪{r1r2} 11: push(A, r1r2) 12: return (Q, Σ, ∆, s1s2, F1 × F2, δ) la gata una R 0 1 1 2 T 1 2 O the one P 0.3 0.7 la gata una R 0 1 1 2 T 1 2 O die eine P 0.6 0.4 M1 M2 Figure 2: Example CSR-like representation of state 0 for transducers M1 and M2 from Figure 1. all transitions from q1 with all transitions from q2 that have the same inner symbol. The composed edges are added to the final transducer, and the corresponding target states q1q2 are pushed into A for future expansion. The search finishes once A runs out of states to expand. 3.3 Transducer representation Our GPU implementation stores FST transition functions in a format similar to compressed sparse row (CSR) format, as introduced by our previous work Argueta and Chiang (2017). For the composition task we use a slightly different representation. An example of the adaptation is shown in Figure 2. The transition function δ for the result is stored in a similar fashion. The storage method is defined as follows: 2700 • z is the number of transitions with nonzero weight. • R is an array of length |Q|m + 1 containing offsets into the arrays T, O, and P. If the states are numbered 0, . . . 
, |Q| −1 and the inner symbols are numbered 0, . . . m −1, then state q’s outgoing transitions on inner symbol b can be found starting at the offset stored in R[qm + b]. The last offset index, R[|Q|m + 1], must equal z. • T[k] is the target state of the kth transition. • O[k] is the outer symbol of the kth transition. • P[k] is the weight of the kth transition. Similarly to several toolkits (such as OpenFST), we require the edges in T, O, P to be sorted by their inner symbols before executing the algorithm, which allows faster indexing and simpler parallelization. 3.4 Parallel composition Our parallel composition implementation has the same overall structure as the serial algorithm, and is shown in Algorithm 2. The two transducers to be composed are stored on the GPU in global memory, in the format described in Section 3.3. Both transducers are sorted according to their inner symbol on the CPU and copied to the device. The memory requirements for a large transducer complicates the storage of the result on the GPU global memory. If the memory of states and edges generated by both inputs does not fit on the GPU, then the composition cannot be computed using only device memory. The execution time will also be affected if the result lives on the device and there is a limited amount of memory available for temporary variables created during the execution. Therefore, the output transducer must be stored on the host using page-locked memory, with the edge transitions unsorted. Page-locked, or pinned, memory is memory that will not get paged out by the operating system. Since this memory cannot be paged out, the amount of RAM available to other applications will be reduced. This enables the GPU to access the host memory quickly. Pinned memory provides better transfer speeds since the GPU creates different mappings to speed up cudaMemcpy operations on host memory. Allocating pinned memory consumes more time than a regular malloc, Algorithm 2 Parallel composition algorithm. Input Transducers: M1 = (Q1, Σ, Γ, s1, F1, δ1) M2 = (Q2, Γ, ∆, s2, F2, δ2) Output Transducer: M1 ◦M2 1: A ←{s1s2} ▷Queue of states to process 2: Q ←{s1s2} ▷Set of states visited 3: δ ←[] ▷List of transitions 4: while |A| > 0 do 5: q1q2 ←pop(A) 6: δd ←[] 7: Ad ←∅ 8: H ←∅ 9: red ←false 10: parfor b ∈Γ do ▷kernels 11: parfor q1 a:b/p1 −−−−−→r1 do ▷threads 12: parfor q2 b:c/p2 −−−−−→r2 do ▷threads 13: append q1q2 a:c/p1p2 −−−−−−→r1r2 to δd 14: if h(a, c, r1r2) ∈H then 15: red ←true 16: else 17: add h(a, c, r1r2) to H 18: if r1r2 < Q then 19: Ad ←Ad ∪{r1r2} 20: Q ←Q ∪{r1r2} 21: concatenate δd to δ 22: for q ∈Ad do push(A, q) 23: if red then 24: sort δ[q1q2] 25: reduce δ[q1q2] 26: return (Q, Σ, ∆, s1s2, F1 × F2, δ) therefore it should be done sporadically. In this work, pinned memory is allocated only once at start time and released once the composition has been completed. Using page-locked memory on the host side as well as pre-allocating memory on the device decreases the time to both copy the results back from the GPU, and the time to reuse device structures used on different kernel methods. Generating transitions The frontier queue A is stored on the host. For each state q1q2 popped from A, we need to compose all outgoing transitions of q1 and q2 obtained from M1 and M2 respectively. Following previous work (Merrill et al., 2012; Jurish and W¨urzner, 2013), we create these in parallel, using the three parfor loops in lines 10–12. 
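For reference, the CSR-like lookup of Section 3.3 can be sketched in plain Python as follows (our own illustration; the toy arrays encode only state 0 of M1 from Figure 2, with the inner alphabet indexed as la = 0, gata = 1, una = 2):

```python
def outgoing(R, T, O, P, q, b, m):
    """Return the transitions of state q with inner symbol b from the
    CSR-like arrays described in Section 3.3. R holds |Q|*m + 1 offsets;
    the edges for (q, b) occupy the half-open range
    [R[q*m + b], R[q*m + b + 1]) of the target (T), outer-symbol (O) and
    weight (P) arrays. This contiguity is what allows coalesced reads on
    the GPU when all edges for one inner symbol are composed together.
    """
    lo, hi = R[q * m + b], R[q * m + b + 1]
    return list(zip(T[lo:hi], O[lo:hi], P[lo:hi]))


m = 3                       # inner alphabet {la: 0, gata: 1, una: 2}
R = [0, 1, 1, 2]            # offsets for (0, la), (0, gata), (0, una); R[-1] = z
T = [1, 2]                  # target states
O = ["the", "one"]          # outer symbols
P = [0.3, 0.7]              # weights
print(outgoing(R, T, O, P, q=0, b=0, m=m))   # [(1, 'the', 0.3)]
print(outgoing(R, T, O, P, q=0, b=1, m=m))   # []  (no edge on 'gata')
```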
Although these three loops are written the same way in pseudocode for simplic2701 ity, in actuality they use two different parallelization schemes in the actual implementation of the algorithm. The outer loop launches a CUDA kernel for each inner symbol b ∈Γ. For example, to compose the start states in Figure 1, three kernels will be launched (one for la, gata, and una). Each of these kernels composes all outgoing transitions of q1 with output b with all outgoing transitions of q2 with input b. Each of these kernels is executed in a unique stream, so that a higher parallelization can be achieved. Streams are used in CUDA programming to minimize the number of idle cores during execution. A stream is a group of commands that execute in-order on the GPU. What makes streams useful in CUDA is the ability to execute several asynchronously. If more than one kernel can run asynchronously without any interdependence, the assignment of kernel calls to different streams will allow a higher speedup by minimizing the amount of idling cores during execution. All kernel calls using streams are asynchronous to the host making synchronization between several different streams necessary if there exist data dependencies between different parts of the execution pipeline. Asynchronous memory transactions can also benefit from streams, if these operations do not have any data dependencies. We choose a kernel block size of 32 for the kernel calls since this is the amount of threads that run in parallel on all GPU streaming multiprocessors at any given time. If the number of threads required to compose a tuple of states is not divisible by 32, the number of threads is rounded up to the closest multiple. When several input tuples generate less than 32 edges, multiple cores will remain idle during execution. Our approach obtains better speedups when the input transducers are able to generate a large amount of edges for each symbol b and each state tuple on the result. In general, the kernels may take widely varying lengths of time based on the amount of composed edges; using streams enables the scheduler to minimize the number of idle cores. The two inner loops represent the threads of the kernel; each composes a pair of transitions sharing an inner symbol b. Because these transitions are stored contiguously (Figure 2), the reads can be coalesced, meaning that the memory reads from the parallel threads can be combined into one transaction for greater efficiency. Figure 2 shows how the edges for a transducer are stored in global memory to achieve coalesced memory operations each time the edges of a symbol b associated with a state tuple q1,q2 need to be composed. Figure 2 shows how the edges leaving the start state tuple for transducers M1 and M2 are stored. As mentioned above, three kernels will be launched to compose the transitions leaving the start states, but only two will be executed (because there are no transitions on gata for both start states). For R[la] on machine M1, only one edge can output la given R[la + 1] −R[la] = 1, and machine M2 has one edge that reads la given R[la + 1] −R[la] = 1. For this example, R[la] points to index 0 on T, O, P for both states. This means that only one edge will be generated from the composition (0, 0 the:die/0.18 −−−−−−−−−−→1, 1). For symbol gata, no edges can be composed given R[gata + 1] −R[gata] = 0 on both machines, meaning that no edges read or output that symbol. 
Finally, for R[una] on machine M1 and M2, one edge can be generated (0, 0 one:eine/0.28 −−−−−−−−−−−→2, 2) given the offsets in R for both input FSTs. If n1 edges can be composed for a symbol b on one machine and n2 from the other one, the kernel will generate n1n2 edges. The composed transitions are first appended to a pre-allocated buffer δd on the GPU. After processing the valid compositions leaving q1q2, all the transitions added in δd are appended in bulk to δ on the host. Updating frontier and visited set Each destination state r1r2, if previously unvisited, needs to be added to both A and Q. Instead of adding it directly to A (which is stored on the host), we add it to a buffer Ad stored on the device to minimize the communication overhead between the host and the device. After processing q1q2 and synchronizing all streams, Ad is appended in bulk to A using a single cudaMemcpy operation. The visited set Q is stored on the GPU device as a lookup table of length |Q1||Q2|. Merrill et al. (2012) perform BFS using two stages to obtain the states and edges needed for future expansion. Similarly, our method performs the edge expansion using two steps by using the lookup table Q. The first step of the kernel updates Q and all visited states that need to be added to Ad. The second step appends all the composed edges to δ in parallel. Since several threads check the table in parallel, 2702 an atomic operation (atomicOr) is used to check and update each value on the table in a consistent fashion. Q also functions as a map to convert the state tuple q1q2 into a single integer. Each time a tuple is not in Q, the structure gets updated with the total number of states generated plus one for a specific pair of states. Reduction Composed edges with the same source, target, input, and output labels must be merged, summing their probabilities. This is done in lines 23–25, which first sort the transitions and then merge and sum them. To do this, we pack the transitions into an array of keys and an array of values. Each key is a tuple (a, c, r1r2) packed into a 64-bit integer. We then use the sort-by-key and reduce-by-key operations provided by the Thrust library. The mapping of tuples to integers is required for the sort operation since the comparisons required for the sorting can be made faster than using custom data structures with a custom comparison operator. 1 Because the above reduction step is rather expensive, lines 14–17 use a heuristic to avoid it if possible. H is a set of transitions represented as a hash table without collision resolution, so that lookups can yield false positives. If red is false, then there were no collisions, so the reduction step can be skipped. The hash function is simply h(a, c, r1r2) = a + c|Σ|. In more detail, H actually maps from hashes to integers. Clearing H (line 8) actually just increments a counter i; storing a hash k is implemented as H[k] ←i, so we can test whether k is a member by testing whether H[k] = i. An atomic operation (atomicExch) is used to consistently check H since several threads update this variable asynchronously. 4 Experiments We tested the performance of our implementation by constructing several FSTs of varying sizes and comparing our implementation against other baselines. 4.1 Setup In our previous work (Argueta and Chiang, 2017), we created transducers for a toy translation task. 
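The reduction step described in Section 3.4 can be mimicked on the CPU with the following Python sketch (our own illustration; the GPU version instead packs each (a, c, r1r2) key into a 64-bit integer and applies Thrust's sort_by_key and reduce_by_key):

```python
from itertools import groupby

def reduce_transitions(edges):
    """Merge composed transitions that share the same
    (input symbol, output symbol, target state) and sum their weights,
    the CPU analogue of the sort-and-reduce step on the GPU.

    edges: list of ((a, c, target), weight) pairs for one source state.
    """
    edges = sorted(edges, key=lambda e: e[0])
    return [(key, sum(w for _, w in group))
            for key, group in groupby(edges, key=lambda e: e[0])]


# e.g. two identical edges into target (3, 3) are merged, weights summed
edges = [((5, 9, (3, 3)), 0.5), ((5, 9, (3, 3)), 0.5), ((1, 2, (1, 1)), 0.18)]
print(reduce_transitions(edges))
# [((1, 2, (1, 1)), 0.18), ((5, 9, (3, 3)), 1.0)]
```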
We trained a bigram language model (as in Figure 3a) and a one-state translation model (as in Figure 3) with probabilities estimated from 1https://thrust.github.io/ 0 1 2 3 la/0.8 una/0.2 gata/1.0 gata/1.0 la:the/0.6 una:the/0.4 gata:cat/1 (a) (b) Figure 3: The transducers used for testing were obtained by pre-composing: (a) a language model and (b) a translation model. These two composed together form a transducer that can translate an input sequence from one language (here, Spanish) into another language (here, English). GIZA++ Viterbi word alignments. Both were trained on the Europarl corpus. We then precomposed them using the Carmel toolkit (Graehl, 1997). We used the resulting FSTs to test our parallel composition algorithm, composing a German-toEnglish transducer with a English-to-t transducer to translate German to language t, where t is German, Spanish, or Italian. Our experiments were tested using two different architectures. The serial code was measured using a 16-core Intel Xeon CPU E5-2650 v2, and the parallel implementation was executed on a system with a GeForce GTX 1080 Ti GPU connected to a 24-core Intel Xeon E5-2650 v4 processor. 4.2 Baselines In this work, OpenFST (Allauzen et al., 2007) and our serial implementation (Algorithm 1) were used as a baseline for comparison. OpenFST is a toolkit developed by Google as a successor of the AT&T Finite State Machine library. For consistency, all implementations use the OpenFST text file format to read and process the transducers. 4.3 Results OpenFST’s composition operation can potentially create multiple transitions (that is, two or more transitions with the same source state, destination state, input label, and output label); a separate function (ArcSumMapper) must be applied to merge multiple transitions and sum their weights. Previous work also requires an additional step if identical edges need to be merged. For this reason, 2703 Training size (lines) 1000 10000 15000 Method Hardware Target Time Ratio Time Ratio Time Ratio OpenFST Xeon E5 DE 0.52 0.78 69.51 3.56 157.16 4.38 our serial Xeon E5 DE 0.21 0.31 28.47 1.45 72.33 2.02 our parallel GeForce GTX 1080 DE 0.67 1.00 19.54 1.00 35.89 1.00 OpenFST Xeon E5 ES 0.46 0.72 55.62 2.97 137.16 4.07 our serial Xeon E5 ES 0.19 0.30 23.30 1.24 62.42 1.85 our parallel GeForce GTX 1080 ES 0.64 1.00 18.72 1.00 33.71 1.00 OpenFST Xeon E5 IT 0.54 0.79 60.66 3.05 136.06 3.91 our serial Xeon E5 IT 0.21 0.31 25.58 1.28 119.84 3.45 our parallel GeForce GTX 1080 IT 0.68 1.00 19.88 1.00 34.76 1.00 Table 2: This table shows how the total running time of our GPU implementation compares against all other methods. Times (in seconds) are for composing two transducers using English as the shared input/output vocabulary and German as the source language of the first transducer (de-en,en-*). Ratios are relative to our parallel algorithm on the GeForce GTX 1080 Ti. we compare our implementation against OpenFST both with and without the reduction of transitions with an identical source,target,input, and output. We analyzed the time to compose all possible edges without performing any reductions (Algorithm 1, line 8). The second setup analyzes the time it takes to compute the composition and the arc summing of identical edges generated during the process. Table 2 shows the performance of the parallel implementation and the baselines without reducing identical edges. 
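For reference, the OpenFST baseline of Section 4.2 corresponds roughly to the following usage of the library's C++ API; the file names are placeholders, error handling is omitted, and conversion from the text file format is not shown.

```cpp
#include <fst/fstlib.h>

int main() {
    fst::StdVectorFst *a = fst::StdVectorFst::Read("de-en.fst");
    fst::StdVectorFst *b = fst::StdVectorFst::Read("en-es.fst");
    // Compose() requires one side to be arc-sorted on the shared labels.
    fst::ArcSort(a, fst::OLabelCompare<fst::StdArc>());
    fst::StdVectorFst c;
    fst::Compose(*a, *b, &c);   // composed transducer, duplicates not yet merged
    // The separate arc-summing pass (ArcSumMapper) mentioned above would be
    // applied here when identical transitions must be merged.
    c.Write("de-es.fst");
    delete a;
    delete b;
    return 0;
}
```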
For the smallest transducers, our parallel implementation is slower than the baselines (0.72× compared to OpenFST and 0.30× compared to our serial version). With larger transducers, the speedups increase up to 4.38× against OpenFST and 2.02× against our serial implementation. Larger speedups are obtained for larger transducers because the GPU can utilize the streaming multiprocessors more fully. On the other hand, the overhead created by CUDA calls, device synchronization, and memory transfers between the host CPU and the device might be too expensive when the inputs are too small. Table 3 shows the performance of all implementations with the reduction operation. Again, for the smallest transducers we can see a similar behavior, our parallel implementation is slower (0.30× against OpenFST and 0.39× against our serial version). Speedups improve with the larger transducers, eventually achieving a 4.52× speedup over OpenFST and a 6.26× speedup over our serial implementation of the composition algorithm. 4.4 Discussion One comparison missing above is a comparison against a multicore processor. We attempted to compare against a parallel implementation using OpenMP on a single 16-core processor, but it did not yield any meaningful speedup, and even slowdowns of up to 10%. We think the reason for this is that because the BFS-like traversal of the FST makes it impractical to process states in parallel, the best strategy is to process and compose transitions in parallel. This very fine-grained parallelism does not seem suitable for OpenMP, as the overhead due to thread initialization and synchronization is higher than the time to execute the parallel sections of the code where the actual composition is calculated. According to our measurements, the average time to compose two transitions is 7.4 nanoseconds, while the average time to create an OpenMP thread is 10.2 nanoseconds. By contrast, the overhead for creating a CUDA thread seems to be around 0.4 nanoseconds. While a different parallelization strategy may exist for multicore architectures, at present, our finding is that GPUs, or other architectures with a low cost to create and destroy threads, are much more suitable for the fine grained operations used for the composition task. 2704 Training size (lines) 1000 10000 15000 Method Hardware Target Time Ratio Time Ratio Time Ratio OpenFST Xeon E5 DE 0.87 0.41 148.11 3.19 374.72 4.52 our serial Xeon E5 DE 0.96 0.45 213.27 4.59 518.97 6.26 our parallel GeForce GTX 1080 DE 2.11 1.00 47.70 1.00 82.88 1.00 OpenFST Xeon E5 ES 0.60 0.30 116.45 2.66 279.85 3.57 our serial Xeon E5 ES 0.77 0.39 202.15 4.61 390.29 4.97 our parallel GeForce GTX 1080 ES 2.00 1.00 45.30 1.00 78.38 1.00 OpenFST Xeon E5 IT 0.76 0.36 130.61 2.87 309.28 3.79 our serial Xeon E5 IT 1.06 0.50 158.57 3.48 427.51 5.24 our parallel GeForce GTX 1080 IT 2.12 1.00 47.04 1.00 81.54 1.00 Table 3: This table shows how the total running time of our GPU implementation compares against all other methods. Times (in seconds) are for composing two transducers and performing edge reduction using English as the shared input/output vocabulary and German as the source language of the first transducer (de-en,en-*). Ratios are relative to our parallel algorithm on the GeForce GTX 1080 Ti. 5 Future Work For future work, other potential bottlenecks could be addressed. The largest bottleneck is the queue used on the host to keep track of the edges to expand on the GPU. 
Using a similar data structure on the GPU to keep track of the states to expand would yield higher speedups. The only challenge of using such a data structure is the memory consumption on the GPU. If the two input transducers contain a large number of states and transitions, the amount of memory needed to track all the states and edges generated will grow significantly. Previous work (Harish and Narayanan, 2007) has shown that state queues on the GPU cause a large memory overhead. Therefore, if state expansion is moved to the GPU, the structures used to keep track of the states must be compressed or occupy the least amount of memory possible on the device in order to allocate all structures required on the device. The queue will also require a mechanism to avoid inserting duplicate tuples into the queue. For the reduction step, speedups can be achieved if the sort and reduce operations can be merged with the edge expansion part of the method. The challenge of merging identical edges during expansion is the auxiliary memory that will be required to store and index intermediate probabilities. It can be doable if the transducers used for the composition are small. In that case, the reduce operation might not yield significant speedups given the fact that the overhead to compose small transducers is too high when using a GPU architecture. 6 Conclusion This is the first work, to our knowledge, to deliver a parallel GPU implementation of the FST composition algorithm. We were able to obtain speedups of up to 4.5× over a serial OpenFST baseline and 6× over the serial implementation of our method. This parallel method considers several factors, such as host to device communication using page-locked memory, storage formats on the device, thread configuration, duplicate edge detection, and duplicate edge reduction. Our implementation is available as open-source software.2 Acknowledgements We thank the anonymous reviewers for their helpful comments. This research was supported in part by an Amazon Academic Research Award and a hardware grant from NVIDIA. References Cyril Allauzen and Mehryar Mohri. 2008. 3-way composition of weighted finite-state transducers. In Implementation and Applications of Automata, pages 262–273. Springer. 2https://bitbucket.org/aargueta2/parallel_ composition 2705 Cyril Allauzen, Michael Riley, and Johan Schalkwyk. 2009. A generalized composition algorithm for weighted finite-state transducers. In Proceedings of the Conference of the International Speech Communication Association (ISCA), pages 1203–1206. Cyril Allauzen, Michael Riley, Johan Schalkwyk, Wojciech Skut, and Mehryar Mohri. 2007. OpenFst: A general and efficient weighted finite-state transducer library. In Implementation and Application of Automata, pages 11–23. Springer. Arturo Argueta and David Chiang. 2017. Decoding with finite-state transducers on GPUs. In Proceedings of EACL, pages 1044–1052. Leonard E. Baum, Ted Petrie, George Soules, and Norman Weiss. 1970. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statistics, 41(1):164–171. Octavian Cheng, John Dines, and Mathew Magimai Doss. 2007. A generalized dynamic composition algorithm of weighted finite state transducers for large vocabulary speech recognition. In Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference on, volume 4, pages IV– 345. IEEE. Paul R. Dixon, Diamantino A Caseiro, Tasuku Oonishi, and Sadaoki Furui. 2007. 
The Titech large vocabulary WFST speech recognition system. In Proceedings of the IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU), pages 443– 448. Jonathan Graehl. 1997. Carmel finite-state toolkit. ISI/USC. Alex Graves, Santiago Fern´andez, Faustino Gomez, and J¨urgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of ICML, pages 369–376. Pawan Harish and P. J. Narayanan. 2007. Accelerating large graph algorithms on the GPU using CUDA. In Proceedings of High Performance Computing (HiPC), volume 7, pages 197–208. Takaaki Hori and Atsushi Nakamura. 2005. Generalized fast on-the-fly composition algorithm for WFST-based speech recognition. In Proceedings of INTERSPEECH, pages 557–560. Minyoung Jung, Jinwoo Park, Johann Blieberger, and Bernd Burgstaller. 2017. Parallel construction of simultaneous deterministic finite automata on sharedmemory multicores. In Proceedings of the International Conference on Parallel Processing (ICPP), pages 271–281. Bryan Jurish and Kay-Michael W¨urzner. 2013. Multithreaded composition of finite-state-automata. In Proceedings of FSMNLP, pages 81–89. Richard E. Ladner and Michael J. Fischer. 1980. Parallel prefix computation. Journal of the ACM, 27(4):831–838. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML, pages 282– 289. Duane Merrill, Michael Garland, and Andrew Grimshaw. 2012. Scalable GPU graph traversal. In Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP), pages 117–128. Mehryar Mohri. 1997. Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269–311. Mehryar Mohri. 2009. Weighted automata algorithms. In Handbook of Weighted Automata, pages 213–254. Springer. Mehryar Mohri, Fernando Pereira, and Michael Riley. 2002. Weighted finite-state transducers in speech recognition. Computer Speech & Language, 16(1):69–88. Todd Mytkowicz, Madanlal Musuvathi, and Wolfram Schulte. 2014. Data-parallel finite-state machines. In Proceedings of Architectural Support for Programming Languages and Operating Systems (ASPLOS), pages 529–542. Fernando C. N. Pereira and Michael D. Riley. 1997. Speech recognition by composition of weighted finite automata. In Emmanuel Roche and Yves Schabes, editors, Finite-State Language Processing. MIT Press.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2706–2716 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2706 Supervised Treebank Conversion: Data and Approaches Xinzhou Jiang2∗, Bo Zhang2, Zhenghua Li1,2, Min Zhang1,2, Sheng Li3, Luo Si3 1. Institute of Artificial Intelligence, Soochow University, Suzhou, China 2. School of Computer Science and Technology, Soochow University, Suzhou, China {xzjiang, bzhang17}@stu.suda.edu.cn, {zhli13,minzhang}@suda.edu.cn 3. Alibaba Inc., Hangzhou, China {lisheng.ls,luo.si}@alibaba-inc.com Abstract Treebank conversion is a straightforward and effective way to exploit various heterogeneous treebanks for boosting parsing accuracy. However, previous work mainly focuses on unsupervised treebank conversion and makes little progress due to the lack of manually labeled data where each sentence has two syntactic trees complying with two different guidelines at the same time, referred as bi-tree aligned data. In this work, we for the first time propose the task of supervised treebank conversion. First, we manually construct a bi-tree aligned dataset containing over ten thousand sentences. Then, we propose two simple yet effective treebank conversion approaches (pattern embedding and treeLSTM) based on the state-of-the-art deep biaffine parser. Experimental results show that 1) the two approaches achieve comparable conversion accuracy, and 2) treebank conversion is superior to the widely used multi-task learning framework in multiple treebank exploitation and leads to significantly higher parsing accuracy. 1 Introduction During the past few years, neural network based dependency parsing has achieved significant progress and outperformed the traditional discrete-feature based parsing (Chen and Manning, 2014; Dyer et al., 2015; Zhou ∗The first two (student) authors make equal contributions to this work. Zhenghua is the correspondence author. Treebanks #Tok Grammar Sinica (Chen et al., 2003) 0.36M Case grammar CTB (Xue et al., 2005) 1.62M Phrase structure TCT (Zhou, 2004) 1.00M Phrase structure PCT (Zhan, 2012) 0.90M Phrase structure HIT-CDT (Che et al., 2012) 0.90M Dependency structure PKU-CDT (Qiu et al., 2014) 1.40M Dependency structure Table 1: Large-scale Chinese treebanks (token number in million). et al., 2015; Andor et al., 2016). Most remarkably, Dozat and Manning (2017) propose a simple yet effective deep biaffine parser that further advances the state-of-the-art accuracy by large margin. As reported, their parser outperforms the state-of-the-art discrete-feature based parser of Bohnet and Nivre (2012) by 0.97 (93.76% −92.79%) on the English WSJ data and 6.87 (85.38% −78.51%) on the Chinese CoNLL-2009 data, respectively. Kindly note that all these results are obtained by training parsers on a single treebank. Meanwhile, motivated by different syntactic theories and practices, major languages in the world often possess multiple large-scale heterogeneous treebanks, e.g., Tiger (Brants et al., 2002) and TüBa-D/Z (Telljohann et al., 2004) treebanks for German, Talbanken (Einarsson, 1976) and Syntag (Järborg, 1986) treebanks for Swedish, ISST (Montemagni et al., 2003) and TUT1 treebanks for Italian, etc. Table 1 lists several large-scale Chinese treebanks. In this work, we take HIT-CDT as a case study. Our next-step plan is to annotate bi-tree aligned data for PKU-CDT and then convert PKU-CDT to our guideline. 
For non-dependency treebanks, the straight1http://www.di.unito.it/~tutreeb/ 2707 forward choice is to convert such treebanks to dependency treebanks based on heuristic head-finding rules. The second choice is to directly extend our proposed approaches by adapting the patterns and treeLSTMs for non-dependency structures, which should be straightforward as well. Considering the high cost of treebank construction, it has always been an interesting and attractive research direction to exploit various heterogeneous treebanks for boosting parsing performance. Though under different linguistic theories or annotation guidelines, the treebanks are painstakingly developed to capture the syntactic structures of the same language, thereby having a great deal of common grounds. Previous researchers have proposed two approaches for multi-treebank exploitation. On the one hand, the guiding-feature method projects the knowledge of the source-side treebank into the target-side treebank, and utilizes extra pattern-based features as guidance for the target-side parsing, mainly for the traditional discrete-feature based parsing (Li et al., 2012). On the other hand, the multi-task learning method simultaneously trains two parsers on two treebanks and uses shared neural network parameters for representing common-ground syntactic knowledge (Guo et al., 2016).2 Regardless of their effectiveness, while the guiding-feature method fails to directly use the source-side treebank as extra training data, the multi-task learning method is incapable of explicitly capturing the structural correspondences between two guidelines. In this sense, we consider both of them as indirect exploitation approaches. Compared with the indirect approaches, treebank conversion aims to directly convert a source-side treebank into the target-side guideline, and uses the converted treebank as extra labeled data for training the targetside model. Taking the example in Figure 1, the goal of this work is to convert the under tree that follows the HIT-CDT guideline (Che et al., 2012) into the upper one that follows our new guideline. However, due to the lack 2 Johansson (2013) applies the feature-sharing approach of Daumé III (2007) for multiple treebank exploitation, which can be regarded as a simple discrete-feature variant of multi-task learning. $ 奶奶 叫 我 快 上学 Grandma asks me quickly go to school subj root adv obj pred HED SBV ADV VOB DBL Figure 1: Example of treebank conversion from the source-side HIT-CDT tree (under) to the target-side our-CDT tree (upper). of bi-tree aligned data, in which each sentence has two syntactic trees following the sourceside and target-side guidelines respectively, most previous studies are based on unsupervised treebank conversion (Niu et al., 2009) or pseudo bi-tree aligned data (Zhu et al., 2011; Li et al., 2013), making very limited progress. In this work, we for the first time propose the task of supervised treebank conversion. The key motivation is to better utilize a largescale source-side treebank by constructing a small-scale bi-tree aligned data. In summary, we make the following contributions. (1) We have manually annotated a highquality bi-tree aligned data containing over ten thousand sentences, by reannotating the HIT-CDT treebank according to a new guideline. (2) We propose a pattern embedding conversion approach by retrofitting the indirect guiding-feature method of Li et al. (2012) to the direct conversion scenario, with several substantial extensions. 
(3) We propose a treeLSTM conversion approach that encodes the source-side tree at a deeper level than the shallow pattern embedding approach. Experimental results show that 1) the two conversion approaches achieve nearly the same conversion accuracy, and 2) direct treebank conversion is superior to indirect multi-task learning in exploiting multiple treebanks in methodology simplicity and performance, yet with the cost of manual annotation. We release the annotation guideline and the newly 2708 annotated data in http://hlt.suda.edu.cn/ index.php/SUCDT. 2 Annotation of Bi-tree Aligned Data The key issue for treebank conversion is that sentences in the source-side and target-side treebanks are non-overlapping. In other words, there lacks a bi-tree aligned data in which each sentence has two syntactic trees complying with two guidelines as shown in Figure 1. Consequently, we cannot train a supervised conversion model to directly learn the structural correspondences between the two guidelines. To overcome this obstacle, we construct a bi-tree aligned data of over ten thousand sentences by re-annotating the publicly available dependency-structure HITCDT treebank according to a new annotation guideline. 2.1 Data Annotation Annotation guideline. Unlike phrasestructure treebank construction with very detailed and systematic guidelines (Xue et al., 2005; Zhou, 2004), previous works on Chinese dependency-structure annotation only briefly describe each relation label with a few concrete examples. For example, the HIT-CDT guideline contains 14 relation labels and illustrates them in a 14-page document. The UD (universal dependencies) project3 releases a more detailed language-generic guideline to facilitate cross-linguistically consistent annotation, containing 37 relation labels. However, after in-depth study, we find that the UD guideline is very useful and comprehensive, but may not be completely compact for realistic annotation of Chinese-specific syntax. After many months’ investigation and trial, we have developed a systematic and detailed annotation guideline for Chinese dependency treebank construction. Our 60-page guideline employs 20 relation labels and gives detailed illustrations for annotation, in order to improve consistency and quality. Please refer to Guo et al. (2018) for the details of our guideline, including detailed discussions on the correspondences and differences between the UD guideline and ours. 3http://universaldependencies.org Partial annotation. To save annotation effort, we adopt the idea of Li et al. (2016) and only annotate the most uncertain (difficult) words in a sentence. For simplicity, we directly use their released parser and produce the uncertainty results of all HLT-CDT sentences via two-fold jack-knifing. First, we select 2, 000 most difficult sentences of lengths [5, 10] for full annotation4. Then, we select 3, 000 most difficult sentences of lengths [10, 20] from the remaining data for 50% annotation. Finally, we select 6, 000 most difficult sentences of lengths [5, 25] for 20% annotation from the remaining data. The difficulty of a sentence is computed as the averaged difficulty of its selected words. Annotation platform. To guarantee annotation consistency and data quality, we build an online annotation platform to support strict double annotation and subsequent inconsistency handling. Each sentence is distributed to two random annotators. 
If the two submissions are not the same (inconsistent dependency or relation label), a third expert annotator will compare them and decide a single answer. Annotation process. We employ about 20 students in our university as part-time annotators. Before real annotation, we first give a detailed talk on the guideline for about two hours. Then, the annotators spend several days on systematically studying our guideline. Finally, they are required to annotate 50 testing sentences on the platform. If the submission is different from the correct answer, the annotator receives an instant feedback for selfimprovement. Based on their performance, about 10 capable annotators are chosen as experts to deal with inconsistent submissions. 2.2 Statistics and Analysis Consistency statistics. Compared with the final answers, the overall accuracy of all annotators is 87.6%. Although the overall inter-annotator dependency-wise consistency rate is 76.5%, the sentence-wise consistency rate is only 43.7%. In other words, 56.3% (100 −43.7) sentences are further checked by a third expert annotator. This shows how 4 Punctuation marks are ruled out and unannotated. 2709 difficult it is to annotate syntactic structures and how important it is to employ strict double annotation to guarantee data quality. Annotation time analysis. As shown in Table 2, the averaged sentence length is 15.4 words in our annotated data, among which 4.7 words (30%) are partially annotated with their heads. According to the records of our annotation platform, each sentence requires about 3 minutes in average, including the annotation time spent by two annotators and a possible expert. The total cost of our data annotation is about 550 person-hours, which can be completed by 20 full-time annotators within 4 days. The most cost is spent on quality control via two-independent annotation and inconsistency handling by experts. This is in order to obtain very high-quality data. The cost is reduced to about 150 personhours without such strict quality control. Heterogeneity analysis. In order to understand the heterogeneity between our guideline and the HIT-CDT guideline, we analyze the 36, 348 words with both-side heads in the train data, as shown in Table 2. The consistency ratio of the two guidelines is 81.69% (UAS), without considering relation labels. By mapping each relation label in HIT-CDT (14 in total) to a single label of our guideline (20 in total), the maximum consistency ratio is 73.79% (LAS). The statistics are similar for the dev/test data. 3 Indirect Multi-task Learning Basic parser. In this work, we build all the approaches over the state-of-the-art deep biaffine parser proposed by Dozat and Manning (2017). As a graph-based dependency parser, it employs a deep biaffine neural network to compute the scores of all dependencies, and uses viterbi decoding to find the highestscoring tree. Figure 2 shows how to score a dependency i ←j.5 First, the biaffine parser applies multi-layer bidirectional sequential LSTMs (biSeqLSTM) to encode the input sentence. The word/tag embeddings ewk and etk are concatenated as the input vector at wk. 5 The score computation of the relation labels is analogous, but due to space limitation, we refer readers to Dozat and Manning (2017) for more details. Then, the output vector of the top-layer biSeqLSTM at wk, denoted as hseq k , is fed into two separate MLPs to get two lowerdimensional representation vectors. 
rH k = MLPH ( hseq k ) rD k = MLPD ( hseq k ) (1) where rH k is the representation vector of wk as a head word, and rD k as a dependent. Finally, the score of the dependency i ←j is computed via a biaffine operation. score(i ←j) = [ rD i 1 ]T WbrH j (2) During training, the original biaffine parser uses the local softmax loss. For each wi and its head wj, its loss is defined as −log escore(i←j) ∑ k escore(i←k) . Since our training data is partially annotated, we follow Li et al. (2016) and employ the global CRF loss (Ma and Hovy, 2017) for better utilization of the data, leading to consistent accuracy gain. Multi-task learning aims to incorporate labeled data of multiple related tasks for improving performance (Collobert and Weston, 2008). Guo et al. (2016) apply multi-task learning to multi-treebank exploitation based on the neural transition-based parser of Dyer et al. (2015), and achieve higher improvement than the guiding-feature approach of Li et al. (2012). Based on the state-of-the-art biaffine parser, this work makes a straightforward extension to realize multi-task learning. We treat the source-side and target-side parsing as two individual tasks. The two tasks use shared parameters for word/tag embeddings and multilayer biSeqLSTMs to learn common-ground syntactic knowledge, use separate parameters for the MLP and biaffine layers to learn taskspecific information. 4 Direct Treebank Conversion Task definition. As shown in Figure 1, given an input sentence x, treebank conversion aims to convert the under source-side tree dsrc to the upper target-side tree dtgt. Therefore, the main challenge is how to make full use of the given dsrc to guide the construction 2710 of dtgt. Specifically, under the biaffine parser framework, the key is to utilize dsrc as guidance for better scoring an arbitrary target-side dependency i ←−j. In this paper, we try to encode the structural information of i and j in dsrc as a dense vector from two representation levels, thus leading to two approaches, i.e., the shallow pattern embedding approach and the deep treeLSTM approach. The dense vectors are then used as extra inputs of the MLP layer to obtain better word representations, as shown in Figure 2. 4.1 The Pattern Embedding Approach In this subsection, we propose the pattern embedding conversion approach by retrofitting the indirect guiding-feature method of Li et al. (2012) to the direct conversion scenario, with several substantial extensions. The basic idea of Li et al. (2012) is to use extra guiding features produced by the sourceside parser. First, they train the source parser Parsersrc on the source-side treebank. Then, they use Parsersrc to parse the target-side treebank, leading to pseudo bi-tree aligned data. Finally, they use the predictions of Parsersrc as extra pattern-based guiding features and build a better target-side parser Parsertgt. The original method of Li et al. (2012) is proposed for traditional discrete-feature based parsing, and does not consider the relation labels in dsrc. In this work, we make a few useful extensions for more effective utilization of dsrc. • We further subdivide their “else” pattern into four cases according to the length of the path from wi to wj in dsrc. The left part of Figure 2 shows all 9 patterns. • We use the labels of wi and wj in dsrc, denoted as li and lj. • Inspired by the treeLSTM approach, we also consider the label of wa, the lowest common ancestor (LCA) of wi and wj, denoted as la. Our pattern embedding approach works as follows. 
Given i ←j, we first decide its pattern type according to the structural relationship between wi and wj in dsrc, denoted as pi←j. For example, if wi and wj are both the children of a third word wk in dsrc, then pi←j = “sibling”. Figure 2 shows all 9 patterns. Then, we embed pi←j into a dense vector epi←j through a lookup operation in order to fit into the biaffine parser. Similarly, the three labels are also embedded into three dense vectors, i.e., eli, elj, ela. The four embeddings are combined as rpat i←j to represent the structural information of wi and wj in dsrc. rpat i←j = epi←j ⊕eli ⊕elj ⊕ela (3) Finally, the representation vector rpat i←j and the top-layer biSeqLSTM outputs are concatenated as the inputs of the MLP layer. rD i,i←j = MLPD( rseq i ⊕rpat i←j ) rH j,i←j = MLPH( rseq j ⊕rpat i←j ) (4) Through rpat i←j, the extended word representations, i.e., rD i,i←j and rH j,i←j, now contain the structural information of wi and wj in dsrc. The remaining parts of the biaffine parser is unchanged. The extended rD i,i←j and rH j,i←j are fed into the biaffine layer to compute a more reliable score of the dependency i ←j, with the help of the guidance of dsrc. 4.2 The TreeLSTM Approach Compared with the pattern embedding approach, our second conversion approach employs treeLSTM to obtain a deeper representation of i ←j in the source-side tree dsrc. Tai et al. (2015) first propose treeLSTM as a generalization of seqLSTM for encoding treestructured inputs, and show that treeLSTM is more effective than seqLSTM on the semantic relatedness and sentiment classification tasks. Miwa and Bansal (2016) compare three treeLSTM variants on the relation extraction task and show that the SP-tree (shortest path) treeLSTM is superior to the full-tree and subtree treeLSTMs. In this work, we employ the SP-tree treeLSTM of Miwa and Bansal (2016) for our treebank conversion task. Our preliminary experiments also show the SP-tree treeLSTM outperforms the full-tree treeLSTM, which is consistent with Miwa and Bansal. We did not implement the in-between subtree treeLSTM. 2711 ... ... ... ... ... ... BiSeqLSTM (two layers) MLPD MLPH hseq j hseq i rD i,i←j rH j,i←j Biaffine score(i ←j) consistent: i ←j grand: i ←k ←j sibling: i ←k →j reverse: i →j reverse grand: i →k →j else: {3; 4 −5; 6; ≥7} epi←j rpat i←j eli ⊕elj ⊕ela wa wi wj rtree i←j h↓ i h↓ j h↑ a Figure 2: Computation of score(i ←j) in our proposed conversion approaches. Without the source-side tree dsrc, the baseline uses the basic rD i and rH j (instead of rD i,i←j and rH j,i←j). Given wi and wj and their LCA wa, the SPtree is composed of two paths, i.e., the path from wa to wi and the path from wa to wj, as shown in the right part of Figure 2. Different from the shallow pattern embedding approach, the treeLSTM approach runs a bidirectional treeLSTM through the SP-tree, in order to encode the structural information of wi and wj in dsrc at a deeper level. The topdown treeLSTM starts from wa and accumulates information until wi and wj, whereas the bottom-up treeLSTM propagates information in the opposite direction. Following Miwa and Bansal (2016), we stack our treeLSTM on top of the biSeqLSTM layer of the basic biaffine parser, instead of directly using word/tag embeddings as inputs. For example, the input vector for wk in the treeLSTM is xk = hseq k ⊕elk, where hseq k is the toplevel biSeqLSTM output vector at wk, and lk is the label between wk and its head word in dsrc, and elk is the label embedding. 
In the bottom-up treeLSTM, an LSTM node computes a hidden vector based on the combination of the input vector and the hidden vectors of its children in the SP-tree. The right part of Figure 2 and Eq. (5) illustrate the computation at wa. ˜ha = ∑ k∈C(a) hk ia = σ ( U(i)xa + V(i)˜ha + b(i)) fa,k = σ ( U(f)xa + V(f)hk + b(f)) oa = σ ( U(o)xa + V(o)˜ha + b(o)) ua = tanh ( U(u)xa + V(u)˜ha + b(u)) ca = ia ⊙ua + ∑ k∈C(a) fa,k ⊙ck ha = oa ⊙tanh ( ca ) (5) where C(a) means the children of wa in the SP-tree, and fa,k is the forget vector for wa’s child wk. The top-down treeLSTM sends information from the root wa to the leaves wi and wj. An LSTM node computes a hidden vector based on the combination of its input vector and the hidden vector of its single preceding (father) node in the SP-tree. After performing the biTreeLSTM, we follow Miwa and Bansal (2016) and use the combination of three output vectors to represent the structural information of wi and wj in dsrc, i.e., the output vectors of wi and wj in the topdown treeLSTM, and the output vector of wa 2712 #Sent #Tok (HIT) #Tok (our) train 7,768 119,707 36,348 dev 998 14,863 4,839 test 1,995 29,975 9,679 train-HIT 52,450 980,791 36,348 Table 2: Data statistics. Kindly note that sentences in train are also in train-HIT. in the bottom-up treeLSTM. rtree i←j = h↓ i ⊕h↓ j ⊕h↑ a (6) Similar to Eq. (4) for the pattern embedding approach, we concatenate rtree i←j with the output vectors of the top-layer biSeqLSTM, and feed them into MLPH/D. 5 Experiments 5.1 Experiment Settings Data. We randomly select 1, 000/2, 000 sentences from our newly annotated data as the dev/test datasets, and the remaining as train. Table 2 shows the data statistics after removing some broken sentences (ungrammatical or wrongly segmented) discovered during annotation. The “#tok (our)” column shows the number of tokens annotated according to our guideline. Train-HIT contains all sentences in HIT-CDT except those in dev/test, among which most sentences only have the HIT-CDT annotations. Evaluation. We use the standard labeled attachment score (LAS, UAS for unlabeled) to measure the parsing and conversion accuracy. Implementation. In order to more flexibly realize our ideas, we re-implement the baseline biaffine parser in C++ based on the lightweight neural network library of Zhang et al. (2016). On the Chinese CoNLL-2009 data, our parser achieves 85.80% in LAS, whereas the original tensorflow-based parser6 achieves 85.54% (85.38% reported in their paper) under the same parameter settings and external word embedding. Hyper-parameters. We follow most parameter settings of Dozat and Manning (2017). The external word embedding dictionary is trained on Chinese Gigaword (LDC2003T09) with GloVe (Pennington et al., 2014). For 6https://github.com/tdozat/Parser-v1 Training data UAS LAS Multi-task train & train-HIT 79.29 74.51 Pattern train 86.66 82.03 TreeLSTM train 86.69 82.09 Combined train 86.66 81.82 Table 3: Conversion accuracy on test data. efficiency, we use two biSeqLSTM layers instead of three, and reduce the biSeqLSTM output dimension (300) and the MLP output dimension (200). For the conversion approaches, the sourceside pattern/label embedding dimensions are 50 (thus |rpat i←j| = 200), and the treeLSTM output dimension is 100 (thus |rtree i←j| = 300). During training, we use 200 sentences as a data batch, and evaluate the model on the dev data every 50 batches (as an epoch). Training stops after the peak LAS on dev does not increase in 50 consecutive epochs. 
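As a concrete reference for the scoring function of Eq. (2) that underlies all of the models above, the following is a minimal sketch written with plain loops. It is only illustrative: it does not reproduce the library of Zhang et al. (2016) used in the actual implementation, and the dimensions and variable names are assumptions.

```cpp
#include <cstddef>
#include <vector>

// score(i <- j) = [r_i^D ; 1]^T  W_b  r_j^H, with W_b of size (|r^D|+1) x |r^H|.
double biaffine_score(const std::vector<double> &r_dep,             // r_i^D
                      const std::vector<double> &r_head,            // r_j^H
                      const std::vector<std::vector<double>> &W) {  // W_b
    double score = 0.0;
    for (std::size_t k = 0; k <= r_dep.size(); ++k) {
        const double left = (k < r_dep.size()) ? r_dep[k] : 1.0;    // appended bias term
        double row = 0.0;
        for (std::size_t m = 0; m < r_head.size(); ++m)
            row += W[k][m] * r_head[m];
        score += left * row;
    }
    return score;
}
```

In the conversion models, r_dep and r_head are simply the MLP outputs computed from the concatenation of the biSeqLSTM vector with r_pat or r_tree, so this scoring step itself is unchanged.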
For the multi-task learning approach, we randomly sample 100 train sentences and 100 train-HIT sentences to compose a data batch, for the purpose of corpus weighting. To fully utilize train-HIT for the conversion task, the conversion models are built upon multi-task learning, and directly reuse the embeddings and biSeqLSTMs of the multitask trained model without fine-tuning. 5.2 Results: Treebank Conversion Table 3 shows the conversion accuracy on the test data. As a strong baseline for the conversion task, the multi-task trained target-side parser (“multi-task”) does not use dsrc during both training and evaluation. In contrast, the conversion approaches use both the sentence x and dsrc as inputs. Compared with “multi-task”, the two proposed conversion approaches achieve nearly the same accuracy, and are able to dramatically improve the accuracy with the extra guidance of dsrc. The gain is 7.58 (82.09 − 74.51) in LAS for the treeLSTM approach. It is straightforward to combine the two conversion approaches. We simply concatenate hseq i/j with both rpat i←j and rtree i←j before feeding into MLPH/D. However, the “combined” model leads to no further improvement. This indicates that although the two approaches try 2713 on dev on test UAS LAS UAS LAS Pattern (full) 86.73 81.93 86.66 82.03 w/o distance 86.73 81.75 86.57 81.94 w/o li 86.47 80.55 86.47 81.15 w/o lj 86.55 81.69 86.45 81.76 w/o la 86.24 81.66 86.17 81.51 w/o labels 86.05 79.78 85.93 80.08 TreeLSTM (full) 86.73 81.95 86.69 82.09 w/o labels 86.55 80.32 86.20 80.56 Table 4: Feature ablation for the conversion approaches. to encode the structural information of wi and wj in dsrc from different perspectives, the resulted representations are actually overlapping instead of complementary, which is contrary to our intuition that the treeLSTM approach should give better and deeper representations than the shallow pattern embedding approach. We have also tried several straightforward modifications to the standard treeLSTM in Eq. (5), but found no further improvement. We leave further exploration of better treeLSTMs and model combination approaches as future work. Feature ablation results are presented in Table 4 to gain more insights on the two proposed conversion approaches. In each experiment, we remove a single component from the full model to learn its individual contribution. For the pattern embedding approach, all proposed extensions to the basic pattern-based approach of Li et al. (2012) are useful. Among the three labels, the embedding of li is the most useful and its removal leads to the highest LAS drop of 0.88 (82.03 −81.15). This is reasonable considering that 81.69% dependencies are consistent in the two guidelines, as discussed in the heterogeneity analysis of Section 2.2. Removing all three labels decreases UAS by 0.73 (86.66−85.93) and LAS by 1.95 (82.03 −80.08), demonstrating that the source-side labels are highly correlative with the target-side labels, and therefore very helpful for improving LAS. For the treeLSTM approach, the source-side labels in dsrc are also very useful, improving UAS by 0.49 (86.69 −86.20) and LAS by 1.53 (82.09 −80.56). 5.3 Results: Utilizing Converted Data Another important question to be answered is whether treebank conversion can lead to higher parsing accuracy than multi-task learning. 
In terms of model simplicity, treebank conversion is better because eventually the target-side parser is trained directly on an enlarged homogeneous treebank unlike the multi-task learning approach that needs to simultaneously train two parsers on two heterogeneous treebanks. Table 5 shows the empirical results. Please kindly note that the parsing accuracy looks very low, because the test data is partially annotated and only about 30% most uncertain (difficult) words are manually labeled with their heads according to our guideline, as discussed in Section 2.1. The first-row, “single” is the baseline targetside parser trained on the train data. The second-row “single (hetero)” refers to the source-side heterogeneous parser trained on train-HIT and evaluated on the target-side test data. Since the similarity between the two guidelines is high, as discussed in Section 2.2, the source-side parser achieves even higher UAS by 0.21 (76.20 −75.99) than the baseline target-side parser trained on the small-scale train data. The LAS is obtained by mapping the HIT-CDT labels to ours (Section 2.2). In the third row, “multi-task” is the targetside parser trained on train & train-HIT with the multi-task learning approach. It significantly outperforms the baseline parser by 4.30 (74.51 −70.21) in LAS. This shows that the multi-task learning approach can effectively utilize the large-scale train-HIT to help the target-side parsing. In the fourth row, “single (large)” is the basic parser trained on the large-scale converted train-HIT (homogeneous). We employ the treeLSTM approach to convert all sentences in train-HIT into our guideline.7 We can see that 7 For each sentence in train, which is already partially annotated, the conversion model actually completes the partial target-side tree into a full tree via constrained decoding. As shown by the results in Li et al. (2016), since the most difficult dependencies are known and given to the model, the parsing accuracy will be much higher than the traditional parsing without constraints. 2714 Training data UAS LAS Single train 75.99 70.95 Single (hetero) train-HIT 76.20 68.43 Multi-task train & train-HIT 79.29 74.51 Single (large) converted train-HIT 80.45 75.83 Table 5: Parsing accuracy on test data. LAS difference between any two systems is statistically significant (p < 0.005) according to Dan Bikel’s randomized parsing evaluation comparer for significance test Noreen (1989). Task Training data UAS LAS Conversion train 93.42 90.49 Parsing (baseline) train 89.66 86.41 Parsing (ours) converted train-HIT 91.16 88.07 Table 6: Results on the fully annotated 372 sentences of the test data. the single parser trained on the converted data significantly outperforms the parser in the multi-task learning approach by 1.32 (75.83 − 74.51) in LAS. In summary, we can conclude that treebank conversion is superior to multi-task learning in multi-treebank exploitation for its simplicity and better performance. 5.4 Results on fully annotated data We randomly divided the newly annotated data into train/dev/test, so the test set has a mix of 100%, 50% and 20% annotated sentences. To gain a rough estimation of the performance of different approaches on fully annotated data, we give the results in Table 6. We can see that all the models achieve much higher accuracy on the portion of fully annotated data than on the whole test data as shown in Table 3 and 5, since the dependencies to be evaluated are the most difficult ones in a sentence for the portion of partially annotated data. 
Moreover, the conversion model can achieve over 90% LAS thanks to the guidance of the source-side HIT-CDT tree. Please also note that there would still be a slight bias, because those fully annotated sentences are chosen as the most difficult ones according to the parsing model but are also very short ([5, 10]). 6 Conclusions and Future Work In this work, we for the first time propose the task of supervised treebank conversion by constructing a bi-tree aligned data of over ten thousand sentences. We design two simple yet effective conversion approaches based on the state-of-the-art deep biaffine parser. Results show that 1) the two approaches achieves nearly the same conversion accuracy; 2) relation labels in the source-side tree are very helpful for both approaches; 3) treebank conversion is more effective in multi-treebank exploitation than multi-task learning, and achieves significantly higher parsing accuracy. In future, we would like to advance this work in two directions: 1) proposing more effective conversion approaches, especially by exploring the potential of treeLSTMs; 2) constructing bi-tree aligned data for other treebanks and exploiting all available single-tree and bi-tree labeled data for better conversion. Acknowledgments The authors would like to thank the anonymous reviewers for the helpful comments. We are greatly grateful to all participants in data annotation for their hard work. We also thank Guodong Zhou and Wenliang Chen for the helpful discussions, and Meishan Zhang for his help on the re-implementation of the Biaffine Parser. This work was supported by National Natural Science Foundation of China (Grant No. 61525205, 61502325 61432013), and was also partially supported by the joint research project of Alibaba and Soochow University. References Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of ACL, pages 2442–2452. Bernd Bohnet and Joakim Nivre. 2012. A transition-based system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proceedings of EMNLP 2012, pages 1455–1465. Sabine Brants, Stefanie Dipper, Silvia Hansen, Wolfgang Lezius, and George Smith. 2002. The TIGER treebank. In Proceedings of the Workshop on Treebanks and Linguistic Theory, pages 24–41. 2715 Wanxiang Che, Zhenghua Li, and Ting Liu. 2012. Chinese Dependency Treebank 1.0 (LDC2012T05). In Philadelphia: Linguistic Data Consortium. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of EMNLP, pages 740–750. Keh-Jiann Chen, Chi-Ching Luo, Ming-Chung Chang, Feng-Yi Chen, Chao-Jan Chen, ChuRen Huang, and Zhao-Ming Gao. 2003. Sinica treebank: Design criteria,representational issues and implementation, chapter 13. Kluwer Academic Publishers. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML. Hal Daumé III. 2007. Frustratingly easy domain adaptation. In Proceedings of ACL, pages 256– 263. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependecy parsing. In Proceedings of ICLR. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory. In Proceedings of ACL, pages 334–343. Jan Einarsson. 1976. 
Talbankens skriftspråkskonkordans. Department of Scandinavian Languages, Lund University. Jiang Guo, Wanxiang Che, Haifeng Wang, and Ting Liu. 2016. A universal framework for inductive transfer parsing across multi-typed treebanks. In Proceedings of COLING, pages 12–22. Lijuan Guo, Zhenghua Li, Xue Peng, and Min Zhang. 2018. Data annotation guideline of chinese dependency syntax for multi-domain and multi-source texts. Journal of Chinese Information Processing. Jerker Järborg. 1986. Manual for syntaggning. Department of Linguistic Computation, University of Gothenburg. Richard Johansson. 2013. Training parsers on incompatible treebanks. In Proceedings of NAACL, pages 127–137. Xiang Li, Wenbin Jiang, Yajuan Lü, and Qun Liu. 2013. Iterative transformation of annotation guidelines for constituency parsing. In Proceedings of ACL, pages 591–596. Zhenghua Li, Wanxiang Che, and Ting Liu. 2012. Exploiting multiple treebanks for parsing with quasisynchronous grammar. In Proceedings of ACL, pages 675–684. Zhenghua Li, Min Zhang, Yue Zhang, Zhanyi Liu, Wenliang Chen, Hua Wu, and Haifeng Wang. 2016. Active learning for dependency parsing with partial annotation. In Proceedings of ACL. Xuezhe Ma and Eduard Hovy. 2017. Neural probabilistic model for non-projective mst parsing. In Proceedings of IJCNLP, pages 59– 69. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of ACL, pages 1105–1116. Simonetta Montemagni, Francesco Barsotti, Marco Battista, Nicoletta Calzolari, Ornella Corazzari, Alessandro Lenci, Antonio Zampolli, Francesca Fanciulli, Maria Massetani, Remo Raffaelli, Roberto Basili, Maria Teresa Pazienza, Dario Saracino, Fabio Zanzotto, Nadia Mana, Fabio Pianesi, and Rodolfo Delmonte. 2003. Building the italian syntactic– semantic treebank. In Anne Abeille, editor, Building and Using Syntactically Annotated Corpora. Kluwer, Dordrecht. Zheng-Yu Niu, Haifeng Wang, and Hua Wu. 2009. Exploiting heterogeneous treebanks for parsing. In Proceedings of ACL, pages 46–54. Eric W. Noreen. 1989. Computer-intensive methods for testing hypotheses: An introduction. John Wiley & Sons, Inc., New York. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP, pages 1532–1543. Likun Qiu, Yue Zhang, Peng Jin, and Houfeng Wang. 2014. Multi-view chinese treebanking. In Proceedings of COLING, pages 257–268. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long shortterm memory networks. In Proceedings of ACL, pages 1556–1566. Heike Telljohann, Erhard Hinrichs, and Sandra Kbler. 2004. The Tüba-D/Z treebank: Annotating German with a context-free backbone. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC), pages 2229–2235. Nianwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. In Natural Language Engineering, volume 11, pages 207–238. 2716 Weidong Zhan. 2012. The application of treebank to assist Chinese grammar instruction: a preliminary investigation. Journal of Technology and Chinese Language Teaching, 3(2):16–29. Meishan Zhang, Jie Yang, Zhiyang Teng, and Yue Zhang. 2016. Libn3l:a lightweight package for neural nlp. In Proceedings of LREC, pages 225– 229. Hao Zhou, Yue Zhang, Shujian Huang, and Jiajun Chen. 2015. 
A neural probabilistic structured-prediction model for transition-based dependency parsing. In Proceedings of ACL, pages 1213–1222. Qiang Zhou. 2004. Annotation scheme for Chinese treebank. Journal of Chinese Information Processing, 18(4):1–8. Muhua Zhu, Jingbo Zhu, and Minghan Hu. 2011. Better automatic treebank conversion using a feature-based approach. In Proceedings of ACL, pages 715–719.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2717–2726 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2717 Object-oriented Neural Programming (OONP) for Document Understanding Zhengdong Lu1 Xianggen Liu2,3,4,∗Haotian Cui2,3,4,∗ [email protected], {liuxg16,cht15,yanyk13}@mails.tsinghua.edu.cn, [email protected] 1 DeeplyCurious.ai 2 Department of Biomedical Engineering, School of Medicine, Tsinghua University 3 Beijing Innovation Center for Future Chip, Tsinghua University 4 Laboratory for Brain and Intelligence, Tsinghua University Yukun Yan2,3,4,∗ Daqi Zheng1 Abstract We propose Object-oriented Neural Programming (OONP), a framework for semantically parsing documents in specific domains. Basically, OONP reads a document and parses it into a predesigned object-oriented data structure that reflects the domain-specific semantics of the document. An OONP parser models semantic parsing as a decision process: a neural netbased Reader sequentially goes through the document, and builds and updates an intermediate ontology during the process to summarize its partial understanding of the text. OONP supports a big variety of forms (both symbolic and differentiable) for representing the state and the document, and a rich family of operations to compose the representation. An OONP parser can be trained with supervision of different forms and strength, including supervised learning (SL) , reinforcement learning (RL) and hybrid of the two. Our experiments on both synthetic and real-world document parsing tasks have shown that OONP can learn to handle fairly complicated ontology with training data of modest sizes. 1 Introduction Mapping a document into a structured “machine readable” form is a canonical and probably the most effective way for document understanding. There are quite some recent efforts on designing neural net-based learning machines for this purpose, which can be roughly categorized into two groups: 1) sequence-to-sequence model with the * The work was done when these authors worked as interns at DeeplyCurious.ai. Figure 1: Illustration of OONP on a parsing task. neural net as the black box (Liang et al., 2017), and 2) neural net as a component in a pre-designed statistical model (Zeng et al., 2014). Both categories are hindered in tackling document with complicated structures, by either the lack of effective representation of knowledge or the flexibility in fusing them in the model. Towards solving this problem, we proposed Object-oriented Neural Programming (OONP), a framework for semantically parsing in-domain documents (illustrated in Figure 1). OONP maintains an object-oriented data structure, where objects from different classes are to represent entities (people, events, items etc) which are connected through links with varying types. Each object encapsulates internal properties (both symbolic and differentiable), allowing both neural and symbolic reasoning over complex structures and hence making it possible to represent rich semantics of documents. An OONP parser is neural net-based, but it has sophisticated architecture and mechanism designed for taking and yielding discrete structures, hence nicely combining symbolism (for interpretability and formal reasoning) and connectionism (for flexibility and learnability). 
For parsing, OONP reads a document and parses it into this object-oriented data structure through a series of discrete actions along reading the document sequentially. OONP supports a rich fam2718 ily of operations for composing the ontology, and flexible hybrid forms for knowledge representation. An OONP parser can be trained with supervised learning (SL), reinforcement learning (RL) and hybrid of the two. OONP in a nutshell The key properties of OONP can be summarized as follows 1. OONP models parsing as a decision process: as the “reading and comprehension” agent goes through the text it gradually forms the ontology as the representation of the text through its action; 2. OONP uses a symbolic memory with graph structure as part of the state of the parsing process. This memory will be created and updated through the sequential actions of the decision process, and will be used as the semantic representation of the text at the end 3. OONP can blend supervised learning (SL) and reinforcement learning (RL) in tuning its parameters to suit the supervision signal in different forms and strength. 2 Related Works 2.1 Semantic Parsing Semantic parsing is concerned with translating language utterances into executable logical forms and plays a key role in building conversational interfaces (Jonathan and Percy, 2014). Different from common tasks of semantic parsings, such as parsing the sentence to dependency structure (Buys and Blunsom, 2017) and executable commands (Herzig and Berant, 2017), OONP parses documents into a predesigned objectoriented data structure which is easily readable for both human and machine. It is related to semantic web (Berners-Lee et al., 2001) as well as frame semantics (Charles J, 1982) in the way semantics is represented, so in a sense, OONP can be viewed as a neural-symbolic implementation of semantic parsing with similar semantic representation. 2.2 State Tracking OONP is inspired by Daum´e III et al. (2009) on modeling parsing as a decision process, and the work on state-tracking models in dialogue system (Henderson et al., 2014) for the mixture of symbolic and probabilistic representations of dialogue state. For modeling a document with entities, Yang et al. (2017) use coreference links to recover entity clusters, though they Figure 2: The overall diagram of OONP, where S stands for symbolic representation, D for distributed representation, and S+D for a hybrid of symbolic and distributed parts. only model entity mentions as containing a single word. However, entities whose names consist of multiple words are not considered. Entity Networks (Henaff et al., 2017) and EntityNLM (Ji et al., 2017) have addressed above problem and are the pioneers to model on tracking entities, but they have not considered the properties of the entities. In fact, explicitly modeling the entities both with their properties and contents is important to understand a document, especially a complex document. For example, if there are two persons named ‘Avery’, it is vital to know their genders or last names to avoid confusion. Therefore, we propose OONP to sketch objects and their relationships by building a structured graph for document parsing. 3 OONP: Overview An OONP parser ( illustrated in Figure 2) consists of a Reader equipped with read/write heads, Inline Memory that represents the document, and Carry-on Memory that summarizes the current understanding of the document at each time step. 
For each document to parse, OONP first preprocesses it and puts it into the Inline Memory, and then Reader controls the read-heads to sequentially go through the Inline Memory and at the same time update the Carry-on Memory. We will give a more detailed description of the major components below. 3.1 Memory we have two types of memory, Carry-on Memory and Inline Memory. Carry-on Memory is designed to save the state in the decision process and summarize current understanding of the document based on the text that has been “read”, while Inline Memory is designed to save location-specific information about the document. In a sense, the information in Inline Memory is low-level and unstructured, waiting for Reader to fuse and integrate into more structured representation. Carry-on Memory has three compartments: 2719 • Object Memory: denoted Mobj, the objectoriented data structure constructed during the parsing process; • Matrix Memory: denoted Mmat, a matrixtype memory with fixed size, for differentiable read/write by the controlling neural net (Graves et al., 2014). In the simplest case, it could be just a vector as the hidden state of conventional RNN; • Action History: symbolic memory to save the entire history of actions made during the parsing process. Intuitively, Object Memory stores the extracted knowledge of the document with defined structure and strong evidence, while Matrix Memory keeps the knowledge that is fuzzy, uncertain or incomplete, waiting for further information to confirm, complete or clarify. Object Memory Object Memory stores an object-oriented representation of document, as illustrated in Figure 3. Each object is an instance of a particular class∗, which specifies the innate structure of the object, including internal properties, operations, and how this object can be connected with others. The internal properties can be of different types, for example string or category, which usually correspond to different actions in specifying them: the stringtype property is usually “copied” from the original text in Inline Memory, while the category properties need to be rendered by a classifier. The links are in general directional and typed, resembling a special property viewing from the “source object”. In Figure 3, there are six “linked” objects of three classes (namely, PERSON, EVENT, and ITEM) . Taking ITEM-object I02 for example, it has five internal properties (Type, Model, Color, Value, Status), and is linked with two EVENT-objects through stolen and disposed link respectively. In addition to the symbolic properties and links, each object had also its object-embedding as the distributed interface with Reader. For description simplicity, we will refer to the symbolic part of this hybrid representation of objects as the Ontology, with some slight abuse of this word. Objectembedding is complementary to the symbolic part ∗We only consider flat structure of classes, but it is possible to have a hierarchy of classes with different levels of abstractness, and to allow an object to go from abstract class to its child during parsing with more information obtained. Figure 3: An example of objects of three classes. of the object, recording all the relevant information associated with it but not represented in the Ontology, e.g., the contextual information when the object is created. Both Ontology and the object embeddings will be updated in time by the classdependent operations driven by the actions issued by the Policy-net in Reader. 
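To make this memory layout concrete, here is a minimal Python sketch of the hybrid object representation described above. The class names, field choices, embedding size, and the example property values are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class OONPObject:
    """One entry in Object Memory: symbolic properties and typed links (the Ontology)
    plus a distributed object-embedding serving as the interface with Reader."""
    def __init__(self, cls, embedding_dim=64):
        self.cls = cls                        # e.g. "PERSON", "EVENT", "ITEM"
        self.properties = {}                  # symbolic part, e.g. {"Name": "Avery"}
        self.links = []                       # directional, typed links: (link_type, target_object)
        self.embedding = np.zeros(embedding_dim)   # distributed part, updated by Reader

class CarryOnMemory:
    """The three compartments of Carry-on Memory."""
    def __init__(self, embedding_dim=64, matrix_rows=8):
        self.object_memory = []                                       # list of OONPObject
        self.matrix_memory = np.zeros((matrix_rows, embedding_dim))   # differentiable read/write
        self.action_history = []                                      # symbolic record of actions

# Hypothetical counterpart of ITEM-object I02 in Figure 3: five internal properties
# and two typed links to EVENT-objects (the values here are invented placeholders).
stolen_in, disposed_in = OONPObject("EVENT"), OONPObject("EVENT")
i02 = OONPObject("ITEM")
i02.properties.update({"Type": "car", "Model": "unknown", "Color": "unknown",
                       "Value": "unknown", "Status": "unknown"})
i02.links += [("stolen", stolen_in), ("disposed", disposed_in)]
```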
According to the way the Ontology evolves with time, the parsing task can be roughly classified into two categories: 1) Stationary: there is a final ground truth that does not change with time, and 2) Dynamical: the truth changes with time. For stationary Ontology, see Section 5.2 and 5.3 for example, and for dynamical Ontology, please see Section 5.1. Inline Memory Inline Memory stores the relatively raw representation of the document with the sequential structure. Basically, Inline Memory is an array of memory cells, each corresponding to a pre-defined language unit (e.g., word) in the same order as they are in the original text. Each cell can have distributed part and symbolic part, designed to save the result of preprocessing of text, e.g., plain word embedding, hidden states of RNN, or some symbolic processing. Inline Memory provides a way to represent locally encoded “low level” knowledge of the text, which will be read, evaluated and combined with the global semantic representation in Carry-on Memory by Reader. One particular advantage of this setting is that it allows us to incorporate the local decisions of some other models, including “higher order” ones like local relations across multiple language units, as illustrated in Figure 4. 3.2 Reader Reader is the control center of OONP, coordinating and managing all the operations of OONP. More 2720 Figure 4: Inline Memory with symbolic knowledge. Figure 5: The overall digram of OONP specifically, it takes the input of different forms (reading), processes it (thinking), and updates the memory (writing). As shown in Figure 5, Reader contains Neural Net Controller (NNC) and multiple symbolic processors, and NNC also has Policy-net as its sub-component. Similar to the controller in Neural Turing Machine (Graves et al., 2014), NNC is equipped with multiple read-heads and write-heads for differentiable read/write over Matrix Memory and (the distributed part of) Inline Memory, with a variety of addressing strategies (Graves et al., 2014). Policy-net however issues discrete outputs (i.e., actions), which gradually builds and updates the Object Memory in time. The symbolic processors are designed to handle information in symbolic form from Object Memory, Inline Memory, Action History, and Policy-net, while that from Inline Memory and Action History is eventually generated by Policy-net. In Appendix.A†, we give a particular implementation of Reader with more details. 4 OONP: Actions The actions issued by Policy-net can be generally categorized as the following • New-Assign : determine whether to create an new object for the information at hand or assign it to a certain existed object; • Update.X : determine which internal property or link of the selected object to update; • Update2what : determine the content of the updating, which could be about string, category or links; The typical order of actions is New-Assign → Update.X → Update2what, but it is common to have New-Assign action followed by nothing, when, for example, an object is mentioned but no †The appendix is also available at https://arxiv.org/abs/1709.08853 substantial information is provided. As shown in Figure 6, we give an example of the entire episode of OONP parsing on the short text given in Figure 1, to show that a sequence of actions gradually forms the complete representation of the document. 
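As a rough picture of the action interface just described, the sketch below encodes the three action groups and one typical New-Assign, Update.X, Update2what sequence; the dictionary encoding of the arguments and the example values are assumptions made only for illustration.

```python
from enum import Enum

class ActionGroup(Enum):
    NEW_ASSIGN = 0     # create a new object, assign to an existing one, or do nothing
    UPDATE_X = 1       # choose which property or link of the selected object to update
    UPDATE2WHAT = 2    # supply the content: a string span, a category, or a link target

# One typical step: New-Assign -> Update.X -> Update2what.  A New-Assign followed by
# nothing is also possible, e.g. when an object is mentioned but nothing new is said.
episode = [
    (ActionGroup.NEW_ASSIGN,  {"op": "New", "class": "ITEM"}),
    (ActionGroup.UPDATE_X,    {"object": ("ITEM", 0), "field": "Name"}),
    (ActionGroup.UPDATE2WHAT, {"value": "<string copied from Inline Memory>"}),
]

for group, arguments in episode:
    print(group.name, arguments)
```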
5 Examples of actions

5.1 New-Assign
With any information at hand (denoted S_t) at time t, the choices of New-Assign include the following three categories of actions: 1) creating (New) an object of a certain type, 2) assigning S_t to an existing object, and 3) doing nothing for S_t and moving on. For Policy-net, the stochastic policy is to determine the following probabilities:

prob(c, new | S_t),  c = 1, 2, ..., |C|
prob(c, k | S_t),  for O^{c,k}_t \in M^t_{obj}
prob(none | S_t)

where |C| stands for the number of classes and O^{c,k}_t stands for the kth object of class c at time t. Determining whether to New an object always relies on the following two signals:
1. the information at hand cannot be contained by any existing object;
2. linguistic hints that suggest whether a new object is introduced.
Based on those intuitions, we take a score-based approach to determine the above-mentioned probabilities. More specifically, for a given S_t, Reader forms a "temporary" object with its own structure (denoted \hat{O}_t) with both symbolic and distributed sections. We also have a virtual object for the New action for each class c, denoted O^{c,new}_t, which is typically a time-dependent vector formed by Reader based on information in Matrix Memory. For a given \hat{O}_t, we can then define the following |C| + |M^t_{obj}| + 1 types of score functions:

New:        score^{(c)}_{new}(O^{c,new}_t, \hat{O}_t; \theta^{(c)}_{new}),  c = 1, 2, ..., |C|
Assign:     score^{(c)}_{assign}(O^{c,k}_t, \hat{O}_t; \theta^{(c)}_{assign}),  for O^{c,k}_t \in M^t_{obj}
Do nothing: score_{none}(\hat{O}_t; \theta_{none})

to measure the level of matching between the information at hand and existing objects, as well as the likelihood of creating an object or doing nothing. This process is pictorially illustrated in Figure 7. We can therefore define the following probabilities for the stochastic policy:

prob(c, new | S_t) = exp(score^{(c)}_{new}(O^{c,new}_t, \hat{O}_t; \theta^{(c)}_{new})) / Z(t)    (1)
prob(c, k | S_t)  = exp(score^{(c)}_{assign}(O^{c,k}_t, \hat{O}_t; \theta^{(c)}_{assign})) / Z(t)    (2)
prob(none | S_t)  = exp(score_{none}(\hat{O}_t; \theta_{none})) / Z(t)    (3)

where

Z(t) = \sum_{c' \in C} exp(score^{(c')}_{new}(O^{c',new}_t, \hat{O}_t; \theta^{(c')}_{new})) + \sum_{(c'',k') \in idx(M^t_{obj})} exp(score^{(c'')}_{assign}(O^{c'',k'}_t, \hat{O}_t; \theta^{(c'')}_{assign})) + exp(score_{none}(\hat{O}_t; \theta_{none}))

is the normalizing factor.

Figure 6: A pictorial illustration of a full episode of OONP parsing, where we assume the descriptions of cars (highlighted with shadow) are segmented in preprocessing.

5.2 Updating Objects
In the Update.X step, Policy-net needs to choose the property or external link (or none) to update for the object selected in the New-Assign step. If Update.X chooses to update an external link, Policy-net needs to further determine which object it links to. After that, Update2what updates the chosen property or link. In tasks with a static Ontology, most internal properties and links will be "locked" after they are updated for the first time, with some exceptions for a few semi-structured properties (e.g., the Description property in the experiment in Section 7.2). For a dynamical Ontology, on the contrary, some properties and links are always subject to change.

Figure 7: A pictorial illustration of what the Reader sees in determining whether to New an object and the relevant object when the read-head on Inline Memory reaches the last word in the text in Figure 2. The color of the arrow line stands for different matching functions for object classes, where the dashed lines are for the new object.

6 Learning
The parameters of OONP models (denoted Θ) include those for all operations and those for composing the distributed sections in Inline Memory.
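Before turning to how Θ is learned, the following minimal sketch shows how the |C| + |M^t_{obj}| + 1 scores of Equations (1)-(3) can be normalized into one New-Assign distribution. The bilinear score functions, the dimensions, and the random inputs are stand-ins; the paper's actual scores are class-dependent functions over both the symbolic and distributed sections of the objects.

```python
import numpy as np

def softmax(scores):
    z = np.exp(scores - scores.max())
    return z / z.sum()

def new_assign_distribution(candidate_vec, class_new_vecs, object_vecs, theta):
    """candidate_vec: distributed part of the temporary object O_hat_t.
    class_new_vecs: one virtual 'New' vector per class, |C| in total.
    object_vecs: embeddings of the objects currently in Object Memory.
    theta: stand-in parameters (two bilinear matrices and a 'do nothing' vector)."""
    scores = []
    scores += [v @ theta["W_new"] @ candidate_vec for v in class_new_vecs]     # New, per class
    scores += [v @ theta["W_assign"] @ candidate_vec for v in object_vecs]     # Assign, per object
    scores += [theta["w_none"] @ candidate_vec]                                # Do nothing
    return softmax(np.array(scores))   # length |C| + |M_obj| + 1, normalized as by Z(t)

d = 8
theta = {"W_new": np.eye(d), "W_assign": np.eye(d), "w_none": np.ones(d)}
probs = new_assign_distribution(np.random.randn(d),
                                [np.random.randn(d) for _ in range(3)],   # |C| = 3 classes
                                [np.random.randn(d) for _ in range(2)],   # two existing objects
                                theta)
print(probs.round(3), probs.sum())
```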
These parameters can be trained with supervised learning (SL), reinforcement learning (RL), and a hybrid of the two in different ways. With pure SL, the oracle gives the ground truth about the "right action" at each time step during the entire decision process, with which the parameters can be tuned to maximize the likelihood of the truth, using the following objective function:

J_{SL}(Θ) = -(1/N) \sum_{i}^{N} \sum_{t=1}^{T_i} log(\pi^{(i)}_t[a^{\star}_t])    (4)

where N stands for the number of instances, T_i stands for the number of steps in the decision process for the ith instance, \pi^{(i)}_t[\cdot] stands for the probabilities of the actions at step t from the stochastic policy, and a^{\star}_t stands for the ground-truth action at step t. With RL, the supervision is given as rewards during the decision process, for which an extreme case is to give the final reward at the end of the decision process by comparing the generated Ontology and the ground truth, e.g.,

r^{(i)}_t = 0 if t \neq T_i,    r^{(i)}_t = match(M^{T_i}_{obj}, G_i) if t = T_i    (5)

where match(M^{T_i}_{obj}, G_i) measures the consistency between the Ontology in the Object Memory M^{T_i}_{obj} and the ground truth G_i. We can use a policy search algorithm to maximize the expected total reward, e.g. the commonly used REINFORCE (Williams, 1992), with the gradient

\nabla_Θ J_{RL}(Θ) = -E_{\pi_Θ}[ \nabla_Θ log \pi_Θ(a^i_t | s^i_t) r^{(i)}_{t:T_i} ]    (6)
                  \approx -(1/(N T_i)) \sum_{i}^{N} \sum_{t=1}^{T_i} \nabla_Θ log \pi_Θ(a^i_t | s^i_t) r^{(i)}_{t:T_i}.    (7)

When OONP is applied to real-world tasks, there are often quite natural supervision signals for both SL and RL. More specifically, for a static Ontology one can infer some actions from the final ontology based on some basic assumptions, e.g.,
• the system should New an object the first time it is mentioned;
• the system should put an extracted string (say, that for Name) into the right property of the right object at the end of the string.
For those that cannot be fully inferred, say the categorical properties of an object (e.g., Type for event objects), we have to resort to RL to determine the time of the decision, while we also need SL to train Policy-net on the content of the decision. Fortunately it is quite straightforward to combine the two learning paradigms in optimization. More specifically, we maximize the combined objective

J(Θ) = J_{SL}(Θ) + \lambda J_{RL}(Θ),    (8)

where J_{SL} and J_{RL} are over the parameters within their own supervision mode and \lambda coordinates the weight of the two learning modes on the parameters they share. Equation (8) actually indicates a deep coupling of supervised learning and reinforcement learning, since for any episode the samples of actions related to RL might affect the inputs to the models under supervised learning. For a dynamical Ontology (see Section 7.1 for an example), it is impossible to derive most of the decisions from the final Ontology since they may change over time. For those we have to rely mostly on supervision at the time step to train the action (supervised mode) or count on OONP to learn the dynamics of the ontology evolution by fitting the final ground truth. Both scenarios are discussed in Section 7.1 on a synthetic task.

7 Experiments
We applied OONP to three document parsing tasks, to verify its efficacy on parsing documents with different characteristics and to investigate different components of OONP.

7.1 Task-I: bAbI Task
Data and Task We implemented OONP on an enriched version of the bAbI tasks (Johnson, 2017) with intermediate representations for histories of arbitrary length. In this experiment, we considered only the original bAbI task-2 (Weston et al., 2015), with an instance shown in the left panel of Figure 8.
The ontology has three types of objects: PERSON-object, ITEMobject, and LOCATION-object, and three types of links specifying relations between them (see Figure 8 for an illustration). All three types of objects have Name as the only internal property. The task for OONP is to read an episode of story and recover the trajectory of the evolving ontology. We choose bAbI for its dynamical ontology that evolves with time and ground truth given for each snapshot. Comparing with the real-world tasks we will present later, bAbi has almost trivial internal properties but relatively rich opportunities for links, considering that any two objects of different types could potentially have a link. 2723 Figure 8: One instance of bAbI (6-sentence episode) and the ontology of two snapshots. Action Description NewObject(c) New an object of class-c. AssignObject(c, k) Assign the current information to existed object (c, k) Update(c, k).AddLink(c′, k′, ℓ) Add an link of type-ℓfrom object-(c, k) to object-(c′, k′) Update(c, k).DelLink(c′, k′, ℓ) Delete the link of type-ℓfrom object-(c, k) to object-(c′, k′) Table 1: Actions for bAbI. Implementation Details For preprocessing, we have a trivial NER to find the names of people, items and locations (saved in the symbolic part of Inline Memory) and wordlevel bi-directional GRU for the distributed representations of Inline Memory. In the parsing process, Reader goes through the inline word-byword in the temporal order of the original text, makes New-Assign action at every word, leaving Update.X and Update2what actions to the time steps when the read-head on Inline Memory reaches a punctuation (see more details of actions in Table 1). For this simple task, we use an almost fully neural Reader (with MLPs for Policy-net) and a vector for Matrix Memory, with however a Symbolic Reasoner to maintain the logical consistency after updating the relations with the actions (see Appendx.B for more details). Results and Analysis For training, we use 1,000 episodes with length evenly distributed from one to six. We use just REINFORCE with only the final reward defined as the overlap between the generated ontology and the ground truth, while step-by-step supervision on actions yields almost perfect result (result omitted). For evaluation, we use the F1 (Rijsbergen, 1979) between the generated links and the ground truth averaged over all snapshots of all test instances, since the links are sparse compared with all the possible pairwise relations between objects, with which we get F1= 94.80% without Symbolic Reasoner and F1= 95.30% with it. Clearly OONP can learn fairly well on recovering the evolving ontology with such a small training set and weak supervision (RL with the final reward), showing that the credit assignment over Figure 9: Example of police report & its ontology. to earlier snapshots does not cause much difficulty in the learning of OONP even with a generic policy search algorithm. It is not so surprising to observe that Symbolic Reasoner helps to improve the results on discovering the links, while it does not improve the performance on identifying the objects although it is taken within the learning. 7.2 Task-II: Parsing Police Report Data & Task We implement OONP for parsing Chinese police report (brief description of criminal cases written by policeman), as illustrated in the left panel of Figure 9. We consider a corpus of 5,500 cases with a variety of crime categories, including theft, robbery, drug dealing and others. 
Although the language is reasonably formal, the corpus covers a big variety of topics and language styles, and has a high proportion of typos. The ontology we designed for this task mainly consists of a number of PERSON-objects and ITEM-objects connected through an EVENT-object with several types of relations, as illustrated in the right panel of Figure 9. A PERSON-object has three internal properties: Name (string), Gender (categorical) and Age (number), and two types of external links (suspect and victim) to an EVENTobject. An ITEM-object has three internal properties: Name (string), Quantity (string) and Value (string), and six types of external links (stolen, drug, robbed, swindled, damaged, and other) to an EVENT-object. On average, a sample has 95.24 Chinese words and the ontology has 3.35 objects, 3.47 mentions and 5.02 relationships. Compared with bAbI in Section 7.1, the police report ontology has less pairwise links but much richer internal properties for objects of all three objects. Implementation Details The OONP model is to generate the ontology as illustrated in Figure 9 through a decision process with actions in Table 2. As pre-processing, we performed third party NER algorithm to find peo2724 ple names, locations, item etc. For the distributed part of Inline Memory, we used dilated CNN with different choices of depth and kernel size (Yu and Koltun, 2016), all of which will be jointly learned during training. In updating objects with its stringtype properties (e.g., Name for a PERSON-object ), we use Copy-Paste strategy for extracted string (whose NER tag already specifies which property in an object it goes to) as Reader sees it. For undetermined category properties in existed objects, Policy-net will determine the object to update (a New-Assign action without New option), its property to update (an Update.X action), and the updating operation (an Update2what action) at milestones of the decision process , e.g., when reaching an punctuation. For this task, since all the relations are between the single by-default EVENT-object and other objects, the relations can be reduced to category-type properties of the corresponding objects in practice. For category-type properties, we cannot recover New-Assign and Update.X actions from the label (the final ontology), so we resort RL for learning to determine that part, which is mixed with the supervised learning for Update2what and other actions for string-type properties. Action Description NewObject(c) New an object of class-c. AssignObject(c, k) Assign the current information to existed object (c, k) UpdateObject(c, k).Name Set the name of object-(c, k) with the extracted string. UpdateObject(PERS ON, k).Gender Set the name of a PERSON-object indexed k with the extracted string. UpdateObject(ITEM, k).Quantity Set the quantity of an ITEM-object indexed k with the extracted string. UpdateObject(ITEM, k).Value Set the value of an ITEM-object indexed k with the extracted string. UpdateObject(EVEN T, 1).Items.x Set the link between the EVENT-object and an ITEM-object, where x ∈{stolen, drug, robbed, swindled, damaged, other} UpdateObject(EVEN T, 1).Persons.x Set the link between the EVENT-object and an PERSON-object, and x ∈{victim, suspect} Table 2: Actions for parsing police report. Results & Discussion We use 4,250 cases for training, 750 for validation an held-out 750 for test. 
We consider the following four metrics in comparing the performance of different models: Assignment Accuracy the accuracy on New-Assign actions made by the model Category Accuracy the accuracy of predicting the category properties of all the objects Ontology Accuracy the proportion of instances for which the generated Objects is exactly the same as the ground truth Ontology Accuracy-95 the proportion of instances for which the generated Objects achieves 95% consistency with the ground truth which measures the accuracy of the model in making discrete decisions as well as generating the final ontology. Model Assign Acc. (%) Type Acc. (%) Ont. Acc. (%) Ont. Acc-95 (%) Bi-LSTM (baseline) 73.2 ± 0.58 36.4± 1.56 59.8 ± 0.83 ENTITYNLM (baseline) 87.6 ± 0.50 84.3 ± 0.80 59.6 ± 0.85 72.3 ± 1.37 OONP (neural) 88.5 ± 0.44 84.3 ± 0.58 61.4 ± 1.26 75.2 ± 1.35 OONP (structured) 91.2 ± 0.62 87.0 ± 0.40 65.4 ± 1.42 79.9 ± 1.28 OONP (RL) 91.4 ± 0.38 87.8 ± 0.75 66.7 ± 0.95 80.7 ± 0.82 Table 3: OONP on parsing police reports. We empirically investigated two competing models, Bi-LSTM and EntityNLM , as baselines. Both models can be viewed as simplified versions of OONP. Bi-LSTM consists of a bi-directional LSTM as Inline Memory encoder and a two-layer MLP on top of that as Policy-net. Bi-LSTM does not support categorical prediction for objects due to the lack of explicit object representation, which will only be trained to perform New-Assign actions and evaluated on them (with the relevant metrics modified for it). EntityNLM, on the other hand, has some modest capability for modeling entities with the original purpose of predicting entity mentions (Ji et al., 2017) which has been adapted and re-implemented for this scenario. For OONP , we consider three variants: • OONP (neural): simple version of OONP with only distributed representation in Reader; • OONP (structured): OONP that considers the matching between two structured objects in New-Assign actions; • OONP (RL): another version of OONP (structured) that uses RL‡ to determine the time for predicting the category properties, while OONP (neural) and OONP (structured) use a rule-based approach to determine the time. The experimental results are given in Table 3. As shown in Table 3, Bi-LSTM struggles to achieve around 73% Assignment Accuracy on test set, while OONP (neural) can boost the performance to 88.5%. Arguably, this difference in performance is due to the fact that Bi-LSTM lacks Object Memory, so all relevant information has to be stored in the Bi-LSTM hidden states along the reading process. When we start putting symbolic representation and operation into Reader, as shown in the result of OONP (structure), the performance is again significantly improved on all four metrics. From the result of OONP (RL), RL improves not only the prediction of categorical property (and hence the overall ontology accuracy) but also tasks trained with purely SL (i.e., learning the New-Assign actions). This indicates there might be some deep entanglement between SL and RL through the obvious interaction between features in parsing and/or sharing of parameters. 
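For concreteness, here is a small sketch of how the assignment-level and ontology-level metrics used in this section could be computed from predicted and gold annotations. The flat slot encoding of an ontology and the per-slot notion of 95% consistency are our assumptions, since the paper does not spell out that bookkeeping.

```python
def assignment_accuracy(pred_actions, gold_actions):
    """Fraction of New-Assign decisions that match the gold decision at the same step."""
    assert len(pred_actions) == len(gold_actions)
    hits = sum(p == g for p, g in zip(pred_actions, gold_actions))
    return hits / len(gold_actions)

def ontology_accuracy(pred_ontologies, gold_ontologies, threshold=1.0):
    """Proportion of instances whose generated ontology matches the gold one.
    threshold=1.0 gives Ontology Accuracy; threshold=0.95 approximates Ontology
    Accuracy-95, with consistency measured as the fraction of matching slots."""
    def consistency(pred, gold):
        slots = set(pred) | set(gold)
        return sum(pred.get(s) == gold.get(s) for s in slots) / max(len(slots), 1)
    ok = sum(consistency(p, g) >= threshold
             for p, g in zip(pred_ontologies, gold_ontologies))
    return ok / len(gold_ontologies)

# toy example with invented slot names: the second instance is only 2/3 correct
pred = [{"PERSON.0.Name": "A", "EVENT.0.Type": "theft"},
        {"PERSON.0.Name": "B", "ITEM.0.Value": "500", "EVENT.0.Type": "theft"}]
gold = [{"PERSON.0.Name": "A", "EVENT.0.Type": "theft"},
        {"PERSON.0.Name": "B", "ITEM.0.Value": "800", "EVENT.0.Type": "theft"}]
print(assignment_accuracy(["New ITEM", "Assign ITEM 0"], ["New ITEM", "New ITEM"]))
print(ontology_accuracy(pred, gold), ontology_accuracy(pred, gold, threshold=0.6))
```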
7.3 Task-III: Parsing court judgment docs Data and Task Comparing with Task-II, court judgements are typically much longer, containing multiple events ‡ A more detailed exposition of this idea can be found in (Liu et al., 2018), where RL is used for training a multi-label classifier of text 2725 Figure 10: Left: the judgement document with highlighted part being the description the facts of crime; right: the corresponding ontology of different types and large amount of irrelevant text. The dataset contains 4056 Chinese judgement documents, divided into training/dev/testing set 3256/400/400 respectively. The ontology for this task mainly consists of a number of PERSON-objects and ITEM-objects connected through a number EVENT-object with several types of links. An EVENT-object has three internal properties: Time (string), Location (string), and Type (category, ∈{theft, restitution, disposal}), four types of external links to PERSON-objects (namely, principal, companion, buyer, victim) and four types of external links to ITEM-objects (stolen, damaged, restituted, disposed ). In addition to the external links to EVENT-objects , a PERSON-object has only the Name (string) as the internal property. An ITEM-object has three internal properties: Description (array of strings), Value (string) and Returned(binary) in addition to its external links to EVENT-objects , where Description consists of the words describing the corresponding item, which could come from multiple segments across the document. An object could be linked to more than one EVENT-object, for example a person could be the principal suspect in event A and also a companion in event B. An illustration of the judgement document and the corresponding ontology can be found in Figure 10. Implementation Details We use a model configuration similar to that in Section 7.2, with event-based segmentation of text given by third-party extraction algorithm (Yan et al., 2017) in Inline Memory, which enables OONP to trivially New EVENT-objectwith rules. OONP reads the Inline Memory, fills the EVENTobjects, creates and fills PERSON-objects and ITEM-objects, and specifies the links between them, with the actions summarized in Table 4. When an object is created during a certain event, it will be given an extra feature (not an internal property) indicating this connection, which will be used in deciding links between this object and event object, as well as in determining the future New-Assign actions. Action for 2nd-round Description NewObject(c) New an object of class-c. AssignObject(c, k) Assign the current information to existed object (c, k) UpdateObject(PER SON, k).Name Set the name of the kth PERSON-object with the extracted string. UpdateObject(ITE M, k).Description Add to the description of an kth ITEM-object with the extracted string. UpdateObject(ITE M, k).Value Set the value of an kth ITEM-object with the extracted string. UpdateObject(EVE NT, k).Time Set the time of an kth EVENT-object with the extracted string. UpdateObject(EVE NT, k).Location Set the location of an kth EVENT-object with the extracted string. UpdateObject(EVE NT, k).Type Set the type of the kth EVENT-object among {theft, disposal, restitution} UpdateObject(EVE NT, k).Items.x Set the link between the kth EVENT-object and an ITEM-object, where x ∈{stolen, damaged, restituted, disposed } UpdateObject(EVE NT, k).Persons.x Set the link between the kth EVENT-object and an PERSON-object, and x ∈{principal, companion, buyer, victim} Table 4: Actions for parsing court judgements. 
Results and Analysis We use the same metric as in Section 7.2, and compare two OONP variants, OONP (neural) and OONP (structured), with two baselines EntityNLM and BiLSTM. The two baselines will be tested only on the second-round reading, while both OONP variants are tested on a two-round reading. The results are shown in Table 5. OONP parsers attain accuracy significantly higher than Bi-LSTM. Among, OONP (structure) achieves over 71% accuracy on getting the entire ontology right and over 77% accuracy on getting 95% consistency with the ground truth. We omitted the RL results since the model RL model chooses to predict the type properties same as the simple rules. Model Assign Acc. (%) Type Acc. (%) Ont. Acc. (%) Ont. Acc-95 (%) Bi-LSTM (baseline) 84.66 ± 0.20 18.20 ± 0.74 36.88 ± 1.01 ENTITYNLM (baseline) 90.50 ± 0.21 96.33 ± 0.39 39.85 ± 0.20 48.29 ± 1.96 OONP (neural) 94.50 ± 0.24 97.73 ± 0.12 53.29 ± 0.26 72.22 ± 1.01 OONP (structured) 96.90 ± 0.22 98.80 ± 0.08 71.11 ± 0.54 77.27 ± 1.05 Table 5: OONP on judgement documents. 8 Conclusion We proposed Object-oriented Neural Programming (OONP), a framework for semantically parsing in-domain documents. OONP is neural netbased, but equipped with sophisticated architecture and mechanism for document understanding, therefore nicely combining interpretability and learnability. Experiments on both synthetic and real-world datasets have shown that OONP outperforms several strong baselines by a large margin on parsing fairly complicated ontology. Acknowledgments We thank Fandong Meng and Hao Xiong for their insightful discussion. We also thank Classic Law Institute for providing the raw data. 2726 References Tim Berners-Lee, James Hendler, and Ora Lassila. 2001. The semantic web. Scientific American 284(5):34–43. Jan Buys and Phil Blunsom. 2017. Robust incremental neural semantic graph parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics(ACL). pages 1215–1226. Fillmore Charles J. 1982. Frame semantics. In Linguistics in the Morning Calm pages 111–137. Hal Daum´e III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learning Journal (MLJ) . Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. CoRR abs/1410.5401. Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2017. Tracking the world state with recurrent entity networks. In ICLR. Henderson, Matthew, Blaise Thomson, , and Steve Young. 2014. Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL). pages 292– 299. Jonathan Herzig and Jonathan Berant. 2017. Neural semantic parsing over multiple knowledge-bases. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics(ACL). pages 623–628. Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A. Smith. 2017. Dynamic entity representations in neural language models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing(EMNLP). Association for Computational Linguistics, pages 1830–1839. Daniel D. Johnson. 2017. Learning graphical state transitions. In the International Conference on Learning Representations(ICLR). Berant Jonathan and Liang Percy. 2014. Semantic parsing via paraphrasing. In Association for Computational Linguistics (ACL). Chen Liang, Jonathan Berant, Quoc Le, Kenneth D Forbus, and Ni Lao. 2017. 
Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. In Association for Computational Linguistics(ACL). Xianggen Liu, Lili Mou, Haotian Cui, Zhengdong Lu, and Sen Song. 2018. Jumper: Learning when to make classification decisions in reading. In IJCAI. C. J. Van Rijsbergen. 1979. Information Retrieval. Butterworth-Heinemann, Newton, MA, USA, 2nd edition. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. CoRR abs/1502.05698. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning 8:229–256. Yukun Yan, Daqi Zheng, Zhengdong Lu, and Sen Song. 2017. Event identification as a decision process with non-linear representation of text. CoRR abs/1710.00969. Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2017. Reference-aware language models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing(EMNLP). Association for Computational Linguistics, pages 1850–1859. Fisher Yu and Vladlen Koltun. 2016. Multi-scale context aggregation by dilated convolutions. In ICLR. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING .
2018
253
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2727–2736 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2727 Finding Syntax in Human Encephalography with Beam Search John Hale♠,△Chris Dyer♠Adhiguna Kuncoro♠,♣Jonathan R. Brennan♦ ♠DeepMind, London, UK ♣Department of Computer Science, University of Oxford ♦Department of Linguistics, University of Michigan △Department of Linguistics, Cornell University {jthale,cdyer,akuncoro}@google.com [email protected] Abstract Recurrent neural network grammars (RNNGs) are generative models of (tree, string) pairs that rely on neural networks to evaluate derivational choices. Parsing with them using beam search yields a variety of incremental complexity metrics such as word surprisal and parser action count. When used as regressors against human electrophysiological responses to naturalistic text, they derive two amplitude effects: an early peak and a P600-like later peak. By contrast, a non-syntactic neural language model yields no reliable effects. Model comparisons attribute the early peak to syntactic composition within the RNNG. This pattern of results recommends the RNNG+beam search combination as a mechanistic model of the syntactic processing that occurs during normal human language comprehension. 1 Introduction Computational psycholinguistics has “always been...the thing that computational linguistics stood the greatest chance of providing to humanity” (Kay, 2005). Within this broad area, cognitively-plausible parsing models are of particular interest. They are mechanistic computational models that, at some level, do the same task people do in the course of ordinary language comprehension. As such, they offer a way to gain insight into the operation of the human sentence processing mechanism (for a review see Hale, 2017). As Keller (2010) suggests, a promising place to look for such insights is at the intersection of (a) incremental processing, (b) broad coverage, and (c) neural signals from the human brain. The contribution of the present paper is situated precisely at this intersection. It combines a probabilistic generative grammar (RNNG; Dyer et al., 2016) with a parsing procedure that uses this grammar to manage a collection of syntactic derivations as it advances from one word to the next (Stern et al., 2017, cf. Roark, 2004). Via well-known complexity metrics, the intermediate states of this procedure yield quantitative predictions about language comprehension difficulty. Juxtaposing these predictions against data from human encephalography (EEG), we find that they reliably derive several amplitude effects including the P600, which is known to be associated with syntactic processing (e.g. Osterhout and Holcomb, 1992). Comparison with language models based on long short term memory networks (LSTM, e.g. Hochreiter and Schmidhuber, 1997; Mikolov, 2012; Graves, 2012) shows that these effects are specific to the RNNG. A further analysis pinpoints one of these effects to RNNGs’ syntactic composition mechanism. These positive findings reframe earlier null results regarding the syntaxsensitivity of human processing (Frank et al., 2015). They extend work with eyetracking (e.g. 
Roark et al., 2009; Demberg et al., 2013) and neuroimaging (Brennan et al., 2016; Bachrach, 2008) to higher temporal resolution.1 Perhaps most significantly, they establish a general correspondence between a computational model and electrophysiological responses to naturalistic language.

1 Magnetoencephalography also offers high temporal resolution and as such this work fits into a tradition that includes Wehbe et al. (2014), van Schijndel et al. (2015), Wingfield et al. (2017) and Brennan and Pylkkänen (2017).

Following this Introduction, section 2 presents recurrent neural network grammars, emphasizing their suitability for incremental parsing. Section 3 then reviews a previously-proposed beam search procedure for them. Section 4 goes on to introduce the novel application of this procedure to human data via incremental complexity metrics. Section 5 explains how these theoretical predictions are specifically brought to bear on EEG data using regression. Sections 6 and 7 elaborate on the model comparison mentioned above and report the results in a way that isolates the operative element. Section 8 discusses these results in relation to established computational models. The conclusion, to anticipate section 9, is that syntactic processing can be found in naturalistic speech stimuli if ambiguity resolution is modeled as beam search.

Figure 1: Recurrent neural network grammar configuration used in this paper. The absence of a lookahead buffer is significant, because it forces parsing to be incremental. Completed constituents such as [NP the hungry cat ] are represented on the stack by numerical vectors that are the output of the syntactic composition function depicted in Figure 2.

Figure 2: RNNG composition function traverses daughter embeddings u, v and w, representing the entire tree with a single vector x. This Figure is reproduced from (Dyer et al., 2016).

2 Recurrent neural network grammars for incremental processing
Recurrent neural network grammars (henceforth: RNNGs; Kuncoro et al., 2017; Dyer et al., 2016) are probabilistic models that generate trees. The probability of a tree is decomposed via the chain rule in terms of derivational action-probabilities that are conditioned upon previous actions, i.e. they are history-based grammars (Black et al., 1993). In the vanilla version of RNNG, these steps follow a depth-first traversal of the developing phrase structure tree. This entails that daughters are announced bottom-up one by one as they are completed, rather than being predicted at the same time as the mother. Each step of this generative story depends on the state of a stack, depicted inside the gray box in Figure 1. This stack is "neuralized" such that each stack entry corresponds to a numerical vector. At each stage of derivation, a single vector summarizing the entire stack is available in the form of the final state of a neural sequence model. This is implemented using the stack LSTMs of Dyer et al. (2015). These stack-summary vectors (central rectangle in Figure 1) allow RNNGs to be sensitive to aspects of the left context that would be masked by independence assumptions in a probabilistic context-free grammar.
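A rough sketch of the composition step in Figure 2 may be useful here: a bidirectional LSTM reads the mother label vector followed by the daughter vectors, and its final states are projected back to a single embedding. The use of PyTorch, the tanh projection, and the sizes are our own simplifying choices rather than the exact stack-LSTM machinery of the papers cited above.

```python
import torch
import torch.nn as nn

class Composer(nn.Module):
    """Squeeze [mother; daughter_1 ... daughter_n] into a single vector, roughly
    as in Figure 2: a biLSTM reads the sequence and its two final hidden states
    are projected back down to the embedding size."""
    def __init__(self, dim=32):
        super().__init__()
        self.bilstm = nn.LSTM(dim, dim, bidirectional=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, mother, daughters):
        seq = torch.stack([mother] + daughters).unsqueeze(1)   # (n+1, batch=1, dim)
        _, (h, _) = self.bilstm(seq)                           # h: (2, 1, dim)
        return torch.tanh(self.proj(h.transpose(0, 1).reshape(1, -1))).squeeze(0)

dim = 32
composer = Composer(dim)
np_vec = composer(torch.randn(dim), [torch.randn(dim) for _ in range(3)])
print(np_vec.shape)   # one vector standing for e.g. [NP the hungry cat]
```

The resulting vector stands in for the completed constituent on the stack, and the stack LSTM's final state over such entries is the stack summary just mentioned.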
In the present paper, these stack-summaries serve as input to a multi-layer perceptron whose output is converted via softmax into a categorical distribution over three possible parser actions: open a new constituent, close off the latest constituent, or generate a word. A hard decision is made, and if the first or last option is selected, then the same vector-valued stack–summary is again used, via multilayer perceptrons, to decide which specific nonterminal to open, or which specific word to generate. Phrase-closing actions trigger a syntactic composition function (depicted in Figure 2) which 2729 squeezes a sequence of subtree vectors into one single vector. This happens by applying a bidirectional LSTM to the list of daughter vectors, prepended with the vector for the mother category following §4.1 of Dyer et al. (2016). The parameters of all these components are adaptively adjusted using backpropagation at training time, minimizing the cross entropy relative to a corpus of trees. At testing time, we parse incrementally using beam search as described below in section 3. 3 Word-synchronous beam search Beam search is one way of addressing the search problem that arises with generative grammars — constructive accounts of language that are sometimes said to “strongly generate” sentences. Strong generation in this sense simply means that they derive both an observable word-string as well as a hidden tree structure. Probabilistic grammars are joint models of these two aspects. By contrast, parsers are programs intended to infer a good tree from a given word-string. In incremental parsing with history-based models this inference task is particularly challenging, because a decision that looks wise at one point may end up looking foolish in light of future words. Beam search addresses this challenge by retaining a collection called the “beam” of parser states at each word. These states are rated by a score that is related to the probability of a partial derivation, allowing an incremental parser to hedge its bets against temporary ambiguity. If the score of one analysis suddenly plummets after seeing some word, there may still be others within the beam that are not so drastically affected. This idea of ranked parallelism has become central in psycholinguistic modeling (see e.g. Gibson, 1991; Narayanan and Jurafsky, 1998; Boston et al., 2011). As Stern et al. (2017) observe, the most straightforward application of beam search to generative models like RNNG does not perform well. This is because lexical actions, which advance the analysis onwards to successive words, are assigned such low probabilities compared to structural actions which do not advance to the next word. This imbalance is inevitable in a probability model that strongly generates sentences, and it causes naive beam-searchers to get bogged down, proposing more and more phrase structure rather than moving on through the sentence. To address it, Stern et al. (2017) propose a word-synchronous variant of beam search. This variant keeps searching through structural actions until “enough” high-scoring parser states finally take a lexical action, arriving in synchrony at the next word of the sentence. Their procedure is written out as Algorithm 1. Algorithm 1 Word-synchronous beam search with fast-tracking. After Stern et al. 
(2017)
1: thisword ← input beam
2: nextword ← ∅
3: while |nextword| < k do
4:   fringe ← successors of all states s ∈ thisword via any parsing action
5:   prune fringe to top k
6:   thisword ← ∅
7:   for each parser state s ∈ fringe do
8:     if s came via a lexical action then
9:       add s to nextword
10:    else ▷ must have been structural
11:      add s to thisword
12:    end if
13:  end for
14: end while
15: return nextword pruned to top kword ≪ k

In Algorithm 1 the beam is held in a set-valued variable called nextword. Beam search continues until this set's cardinality exceeds the designated action beam size, k. If the beam still isn't large enough (line 3) then the search process explores one more action by going around the while-loop again. Each time through the loop, lexical actions compete against structural actions for a place among the top k (line 5). The imbalance mentioned above makes this competition fierce, and on many loop iterations nextword may not grow by much. Once there are enough parser states, another threshold called the word beam kword kicks in (line 15). This other threshold sets the number of analyses that are handed off to the next invocation of the algorithm. In the study reported here the word beam remains at the default setting suggested by Stern and colleagues, k/10.

Stern et al. (2017) go on to offer a modification of the basic procedure called "fast tracking" which improves performance, particularly when the action beam k is small. Under fast tracking, an additional step is added between lines 4 and 5 of Algorithm 1 such that some small number kft of parser states are promoted directly into nextword. These states are required to come via a lexical action, but in the absence of fast tracking they quite possibly would have failed the thresholding step in line 5.

                                        k=100  k=200  k=400  k=600  k=800  k=1000  k=2000
Fried et al. (2017) RNNG, ppl unknown, −fast track   74.1   80.1   85.3   87.5   88.7   89.6   not reported
this paper, ppl=141, −fast track                     71.5   78.81  84.15  86.42  87.34  88.16  89.81
this paper, ppl=141, kft = k/100                     87.1   88.96  90.48  90.64  90.84  90.96  91.25
Table 1: Penn Treebank development section bracketing accuracies (F1) under Word-Synchronous beam search. These figures show that an incremental parser for RNNG can perform well on a standard benchmark. "ppl" indicates the perplexity over both trees and strings for the trained model on the development set, averaged over words. In all cases the word beam is set to a tenth of the action beam, i.e. kword = k/10.

Table 1 shows Penn Treebank accuracies for this word-synchronous beam search procedure, as applied to RNNG. As expected, accuracy goes up as the parser considers more and more analyses. Above k = 200, the RNNG+beam search combination outperforms a conditional model based on greedy decoding (88.9). This demonstration reemphasizes the point, made by Brants and Crocker (2000) among others, that cognitively-plausible incremental processing can be achieved without loss of parsing performance.

4 Complexity metrics
In order to relate computational models to measured human responses, some sort of auxiliary hypothesis or linking rule is required. In the domain of language, these are traditionally referred to as complexity metrics because of the way they quantify the "processing complexity" of particular sentences. When a metric offers a prediction on each successive word, it is an incremental complexity metric. Table 2 characterizes four incremental complexity metrics that are all obtained from intermediate states of Algorithm 1.
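To connect Algorithm 1 with the metrics of Table 2, here is a Python rendering of one invocation of the procedure, without fast tracking. The parser-state interface (successors(), is_lexical, logprob) is hypothetical; the loop counter is returned because the complexity metrics below are read off exactly such intermediate states.

```python
def word_sync_beam_step(beam, k, k_word):
    """One invocation of Algorithm 1: explore actions until at least k analyses
    have taken a lexical action, then hand the top k_word on to the next word."""
    thisword, nextword = list(beam), []
    loop_count = 0                                   # times around the while-loop (line 3)
    while len(nextword) < k:
        fringe = [succ for s in thisword for succ in s.successors()]         # line 4
        fringe = sorted(fringe, key=lambda s: s.logprob, reverse=True)[:k]   # line 5
        thisword = []
        for s in fringe:                             # lines 7-13
            (nextword if s.is_lexical else thisword).append(s)
        loop_count += 1
    nextword.sort(key=lambda s: s.logprob, reverse=True)
    return nextword[:k_word], nextword, loop_count   # line 15, plus the full beam and a step count
```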
The metric denoted DISTANCE is the most classic; it is inspired by the count of “transitions made or attempted” in Kaplan (1972). It quantifies syntactic work by counting the number of parser actions explored by Algorithm 1 between each word i.e. the number of times around the while-loop on line 3. The information theoretical quantities SURPRISAL and ENTROPY came into more widespread use later. They quantify unexpectedness and uncertainty, respectively, about alternative syntactic analyses at a given point within a sentence. Hale (2016) reviews their applicability across many different languages, psycholinguistic measurement techniques and grammatical models. Recent work proposes possible relationships between these two metrics, at the empirical as well as theoretical level (van Schijndel and Schuler, 2017; Cho et al., 2018). metric characterization DISTANCE count of actions required to synchronize k analyses at the next word SURPRISAL log-ratio of summed forward probabilities for analyses in the beam ENTROPY average uncertainty of analyses in the beam ENTROPY ∆ difference between previous and current entropy value Table 2: Complexity Metrics The SURPRISAL metric was computed over the word beam i.e. the kword highest-scoring partial syntactic analyses at each successive word. In an attempt to obtain a more faithful estimate, ENTROPY and its first-difference are computed over nextword itself, whose size varies but is typically much larger than kword. 5 Regression models of naturalistic EEG Electroencephalography (EEG) is an experimental technique that measures very small voltage fluctuations on the scalp. For a review emphasizing its implications vis-´a-vis computational models, see Murphy et al. (2018). We analyzed EEG recordings from 33 participants as they passively listened to a 2731 spoken recitation of the first chapter of Alice’s Adventures in Wonderland.2 This auditory stimulus was delivered via earphones in an isolated booth. All participants scored significantly better than chance on a post-session 8-question comprehension quiz. An additional ten datasets were excluded for not meeting this behavioral criterion, six due to excessive noise, and three due to experimenter error. All participants provided written informed consent under the oversight of the University of Michigan HSBS Institutional Review Board (#HUM00081060) and were compensated $15/h.3 Data were recorded at 500 Hz from 61 active electrodes (impedences < 25 kΩ) and divided into 2129 epochs, spanning -0.3–1 s around the onset of each word in the story. Ocular artifacts were removed using ICA, and remaining epochs with excessive noise were excluded. The data were filtered from 0.5–40 Hz, baseline corrected against a 100 ms pre-word interval, and separated into epochs for content words and epochs for function words because of interactions between parsing variables of interest and word-class (Roark et al., 2009). Linear regression was used per-participant, at each time-point and electrode, to identify content-word EEG amplitudes that correlate with complexity metrics derived from the RNNG+beam search combination via the complexity metrics in Table 2. We refer to these time series as Target predictors. 
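As one concrete reading of Table 2, the two information-theoretic quantities can be computed from the log probabilities of the analyses kept in the beam; the toy numbers below are invented and the normalization is our approximation of the definitions in the table.

```python
import math

def logsumexp(logps):
    m = max(logps)
    return m + math.log(sum(math.exp(lp - m) for lp in logps))

def surprisal(prev_word_beam_logps, word_beam_logps):
    """Log-ratio of summed forward probabilities in the word beam,
    before versus after consuming the current word (in nats)."""
    return logsumexp(prev_word_beam_logps) - logsumexp(word_beam_logps)

def entropy(nextword_logps):
    """Average uncertainty over the analyses in nextword (renormalized)."""
    z = logsumexp(nextword_logps)
    ps = [math.exp(lp - z) for lp in nextword_logps]
    return -sum(p * math.log(p) for p in ps)

# toy example: three analyses survive the current word, one clearly dominant
prev, cur = [-2.0, -2.3, -2.5], [-5.1, -7.0, -7.4]
print(round(surprisal(prev, cur), 2), round(entropy(cur), 2))
```

ENTROPY ∆ is then simply the previous word's entropy minus the current one; per-word values like these, computed for each metric and beam size, are what enter the regressions as Target predictors.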
Each Target predictor was included in its own model, along with several Control predictors that are known to influence sentence processing: sentence order, word-order in sentence, log word frequency (Lund and Burgess, 1996), frequency of the previous and subsequent word, and acoustic sound power averaged over the first 50 ms of the epoch. All predictors were mean-centered. We also constructed null regression models in which the rows of the design matrix were randomly permuted.4 β coefficients for each effect were tested against these null models at the group level across 2https://tinyurl.com/alicedata 3A separate analysis of these data appears in Brennan and Hale (2018); datasets are available from JRB. 4Temporal auto-correlation across epochs could impact model fits. Content-words are spaced 1 s apart on average and a spot-check of the residuals from these linear models indicates that they do not show temporal auto-correlation: AR(1) < 0.1 across subjects, time-points, and electrodes. all electrodes from 0–1 seconds post-onset, using a non-parametric cluster-based permutation test to correct for multiple comparisons across electrodes and time-points (Maris and Oostenveld, 2007). 6 Language models for literary stimuli We compare the fit against EEG data for models that are trained on the same amount of textual data but differ in the explicitness of their syntactic representations. At the low end of this scale is the LSTM language model. Models of this type treat sentences as a sequence of words, leaving it up to backpropagation to decide whether or not to encode syntactic properties in a learned history vector (Linzen et al., 2016). We use SURPRISAL from the LSTM as a baseline. RNNGs are higher on this scale because they explicitly build a phrase structure tree using a symbolic stack. We consider as well a degraded version, RNNG−comp which lacks the composition mechanism shown in Figure 2. This degraded version replaces the stack with initial substrings of bracket expressions, following Choe and Charniak (2016); Vinyals et al. (2015). An example would be the length 7 string shown below (S (NP the hungry cat )NP (VP Here, vertical lines separate symbols whose vector encoding would be considered separately by RNNG−comp. In this degraded representation, the noun phrase is not composed explicitly. It takes up five symbols rather than one. The balanced parentheses (NP and )NP are rather like instructions for some subsequent agent who might later perform the kind of syntactic composition that occurs online in RNNGs, albeit in an implicit manner. In all cases, these language models were trained on chapters 2–12 of Alice’s Adventures in Wonderland. This comprises 24941 words. The stimulus that participants saw during EEG data collection, for which the metrics in Table 2 are calculated, was chapter 1 of the same book, comprising 2169 words. RNNGs were trained to match the output trees provided by the Stanford parser (Klein and Manning, 2003). These trees conform to the Penn Treebank annotation standard but do not explicitly mark long-distance dependency or include any empty categories. They seem to adequately represent basic syntactic properties such 2732 as clausal embedding and direct objecthood; nevertheless we did not undertake any manual correction. During RNNG training, the first chapter was used as a development set, proceeding until the per-word perplexity over all parser actions on this set reached a minimum, 180. 
This performance was obtained with a RNNG whose state vector was 170 units wide. The corresponding LSTM language model state vector had 256 units; it reached a per-word perplexity of 90.2. Of course the RNNG estimates the joint probability of both trees and words, so these two perplexity levels are not directly comparable. Hyperparameter settings were determined by grid search in a region near the one which yielded good performance on the Penn Treebank benchmark reported on Table 1. 7 Results To explore the suitability of the RNNG + beam search combination as a cognitive model of language processing difficulty, we fitted regression models as described above in section 5 for each of the metrics in Table 2. We considered six beam sizes k = {100, 200, 400, 600, 800, 1000}. Table 3 summarizes statistical significance levels reached by these Target predictors; no other combinations reached statistical significance. LSTM not significant SURPRISAL k = 100 pcluster = 0.027 DISTANCE k = 200 pcluster = 0.012 SURPRISAL k = 200 pcluster = 0.003 DISTANCE k = 400 pcluster = 0.002 SURPRISAL k = 400 pcluster = 0.049 ENTROPY ∆ k = 400 pcluster = 0.026 DISTANCE k = 600 pcluster = 0.012 ENTROPY k = 600 pcluster = 0.014 Table 3: Statistical significance of fitted Target predictors in Whole-Head analysis. pcluster values are minima for each Target with respect to a Monte Carlo cluster-based permutation test (Maris and Oostenveld, 2007). 7.1 Whole-Head analysis Surprisal from the LSTM sequence model did not reliably predict EEG amplitude at any timepoint or electrode. The DISTANCE predictor did derive a central positivity around 600 ms post-word onset as shown in Figure 3a. SURPRISAL predicted an early frontal positivity around 250 ms, shown in Figure 3b. ENTROPY and ENTROPY ∆seemed to drive effects that were similarly early and frontal, although negative-going (not depicted); the effect for ENTROPY ∆localized to just the left side. 7.2 Region of Interest analysis We compared RNNG to its degraded cousin, RNNG−comp, in three regions of interest shown in Figure 4. These regions are defined by a selection of electrodes and a time window whose zero-point corresponds to the onset of the spoken word in the naturalistic speech stimulus. Regions “N400” and “P600” are well-known in EEG research, while “ANT” is motivated by findings with a PCFG baseline reported by Brennan and Hale (2018). Single-trial data were averaged across electrodes and time-points within each region and fit with a linear mixed-effects model with fixed effects as described below and random intercepts by-subjects (Alday et al., 2017). We used a stepwise likelihood-ratio test to evaluate whether individual Target predictors from the RNNG significantly improved over RNNG−comp, and whether a RNNG−comp model significantly improve a baseline regression model. The baseline regression model, denoted ∅, contains the Control predictors described in section 5 and SURPRISAL from the LSTM sequence model. Targets represent each of the eight reliable whole-head effects detailed in Table 3. These 24 tests (eight effects by three regions) motivate a Bonferroni correction of α = 0.002 = 0.05/24. Statistically significant results obtained for DISTANCE from RNNG−comp in the P600 region and for SURPRISAL for RNNG in the ANT region. No significant results were observed in the N400 region. These results are detailed in Table 4. 
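The nested-model comparison behind Table 4 can be sketched with off-the-shelf tools: the snippet below fits two mixed-effects models with by-subject random intercepts on randomly generated stand-in data and runs a likelihood-ratio test. The real analysis averages single-trial EEG within each region, includes all the Control predictors of Section 5, and applies the Bonferroni correction described above, so this only illustrates the mechanics.

```python
import numpy as np
import pandas as pd
import scipy.stats as stats
import statsmodels.formula.api as smf

def lr_test(reduced, full):
    """Likelihood-ratio test between two nested mixed models fitted by ML."""
    chi2 = 2 * (full.llf - reduced.llf)
    df = len(full.fe_params) - len(reduced.fe_params)
    return chi2, df, stats.chi2.sf(chi2, df)

# stand-in for region-averaged single-trial amplitudes (random, for illustration only)
rng = np.random.default_rng(0)
n = 600
data = pd.DataFrame({
    "subject": rng.integers(0, 33, n).astype(str),
    "eeg": rng.normal(size=n),
    "lstm_surprisal": rng.normal(size=n),   # part of the baseline model
    "rnng_distance": rng.normal(size=n),    # Target predictor under test
})

base = smf.mixedlm("eeg ~ lstm_surprisal", data, groups=data["subject"]).fit(reml=False)
full = smf.mixedlm("eeg ~ lstm_surprisal + rnng_distance", data,
                   groups=data["subject"]).fit(reml=False)
print(lr_test(base, full))
```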
8 Discussion
Since beam search explores analyses in descending order of probability, DISTANCE and SURPRISAL ought to be yoked, and indeed they are correlated at r = 0.33 or greater across all of the beam sizes k that we considered in this study. However they are reliably associated with different EEG effects. SURPRISAL manifests at anterior electrodes relatively early. This seems to be a different effect from that observed by Frank et al. (2015). Frank and colleagues relate N400 amplitude to word surprisals from an Elman-net, analogous to the LSTM sequence model evaluated in this work. Their study found no effects of syntax-based predictors over and above sequential ones. In particular, no effects emerged in the 500–700 ms window, where one might have expected a P600. The present results, by contrast, show that an explicitly syntactic model can derive the P600 quite generally via DISTANCE. The absence of an N400 effect in this analysis could be attributable to the choice of electrodes, or perhaps the modality of the stimulus narrative, i.e. spoken versus read.

Figure 3: (a) DISTANCE derives a P600 at k = 200. (b) SURPRISAL derives an early response at k = 200. Plotted values are fitted regression coefficients and 95% confidence intervals, statistically significant in the dark-shaded region with respect to a permutation test following Maris and Oostenveld (2007). The zero point represents the onset of a spoken word. Insets show electrodes with significant effects along with grand-averaged coefficient values across the significant time intervals. The diagram averages over all content words in the first chapter of Alice's Adventures in Wonderland.

Figure 4: Regions of interest. The first region on the left, named "N400", comprises central-posterior electrodes during a time window 300–500 ms post-onset. The middle region, "P600", includes posterior electrodes 600–700 ms post-onset. The rightmost region, "ANT", consists of just anterior electrodes 200–400 ms post-onset.

The model comparisons in Table 4 indicate that the early peak, but not the later one, is attributable to the RNNG's composition function. Choe and Charniak's (2016) "parsing as language modeling" scheme potentially could explain the P600-like wave, but it would not account for the earlier peak. This earlier peak is the one derived by the RNNG under SURPRISAL, but only when the RNNG includes the composition mechanism depicted in Figure 2. This pattern of results suggests an approach to the overall modeling task. In this approach, both grammar and processing strategy remain the same, and alternative complexity metrics, such as SURPRISAL and DISTANCE, serve to interpret the unified model at different times or places within the brain. This inverts the approach of Brouwer et al. (2017) and Wehbe et al. (2014) who interpret different layers of the same neural net using the same complexity metric.

9 Conclusion
Recurrent neural net grammars indeed learn something about natural language syntax, and what they learn corresponds to indices of human language processing difficulty that are manifested in electroencephalography. This correspondence, between computational model and human electrophysiological response, follows from a system that lacks an initial stage of purely string-based processing.
Previous work was “two-stage” in the sense that the generative model served to 2734 RNNG−comp > ∅ RNNG > RNNG−comp χ2 df p χ2 df p DISTANCE, “P600” region k = 200 13.409 1 0.00025 4.198 1 0.04047 k = 400 15.842 1 <0.0001 3.853 1 0.04966 k = 600 13.955 1 0.00019 3.371 1 0.06635 SURPRISAL, “ANT” region k = 100 3.671 1 0.05537 13.167 1 0.00028 k = 200 3.993 1 0.04570 10.860 1 0.00098 k = 400 3.902 1 0.04824 10.189 1 0.00141 ENTROPY ∆, “ANT” region k = 400 10.141 1 0.00145 5.273 1 0.02165 Table 4: Likelihood-ratio tests indicate that regression models with predictors derived from RNNGs with syntactic composition (see Figure 2) do a better job than their degraded counterparts in accounting for the early peak in region “ANT” (right-hand columns). Similar comparisons in the “P600” region show that the model improves, but the improvement does not reach the α = 0.002 significance threshold imposed by our Bonferroni correction (bold-faced text). RNNGs lacking syntactic composition do improve over a baseline model (∅) containing lexical predictors and an LSTM baseline (left-hand columns). rerank proposals from a conditional model (Dyer et al., 2016). If this one-stage model is cognitively plausible, then its simplicity undercuts arguments for string-based perceptual strategies such as the Noun-Verb-Noun heuristic (for a textbook presentation see Townsend and Bever, 2001). Perhaps, as Phillips (2013) suggests, these are unnecessary in an adequate cognitive model. Certainly, the road is now open for more fine-grained investigations of the order and timing of individual parsing operations within the human sentence processing mechanism. Acknowledgments This material is based upon work supported by the National Science Foundation under Grants No. 1607441 and No. 1607251. We thank Max Cantor and Rachel Eby for helping with data collection. References Phillip M. Alday, Matthias Schlesewsky, and Ina Bornkessel-Schlesewsky. 2017. Electrophysiology reveals the neural dynamics of naturalistic auditory language processing: event-related potentials reflect continuous model updates. eNeuro, 4(6). Asaf Bachrach. 2008. Imaging neural correlates of syntactic complexity in a naturalistic context. Ph.D. thesis, MIT. Ezra Black, Fred Jelinek, John Lafrerty, David M. Magerman, Robert Mercer, and Salim Roukos. 1993. Towards history-based grammars: Using richer models for probabilistic parsing. In 31st Annual Meeting of the Association for Computational Linguistics. Marisa Ferrara Boston, John T. Hale, Shravan Vasishth, and Reinhold Kliegl. 2011. Parallel processing and sentence comprehension difficulty. Language and Cognitive Processes, 26(3):301–349. Thorsten Brants and Matthew Crocker. 2000. Probabilistic parsing and psychological plausibility. In Proceedings of 18th International Conference on Computational Linguistics COLING-2000, Saarbr¨ucken/Luxembourg/Nancy. Jonathan R. Brennan and John T. Hale. 2018. Hierarchical structure guides rapid linguistic predictions during naturalistic listening. Forthcoming. Jonathan R. Brennan and Liina Pylkk¨anen. 2017. MEG evidence for incremental sentence composition in the anterior temporal lobe. Cognitive Science, 41(S6):1515–1531. Jonathan R. Brennan, Edward P. Stabler, Sarah E. Van Wagenen, Wen-Ming Luh, and John T. Hale. 2016. Abstract linguistic structure correlates with temporal activity during naturalistic comprehension. Brain and Language, 157-158:81–94. Harm Brouwer, Matthew W. Crocker, Noortje J. Venhuizen, and John C. J. Hoeks. 2017. 
A neurocomputational model of the N400 and the P600 in language processing. Cognitive Science, 41:1318–1352. Pyeong Whan Cho, Matthew Goldrick, Richard L. Lewis, and Paul Smolensky. 2018. Dynamic encoding of structural uncertainty in gradient symbols. In 2735 Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018), pages 19–28. Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2331–2336, Austin, Texas. Vera Demberg, Frank Keller, and Alexander Koller. 2013. Incremental, predictive parsing with psycholinguistically motivated tree-adjoining grammar. Computational Linguistics, 39(4):1025–1066. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 334–343. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199–209, San Diego, California. Stefan L. Frank, Leun J. Otten, Giulia Galli, and Gabriella Vigliocco. 2015. The ERP response to the amount of information conveyed by words in sentences. Brain and Language, 140:1–11. Daniel Fried, Mitchell Stern, and Dan Klein. 2017. Improving neural parsing by disentangling model combination and reranking effects. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 161–166, Vancouver, Canada. Edward Gibson. 1991. A Computational Theory of Human Linguistic Processing: Memory Limitations and Processing Breakdown. Ph.D. thesis, Carnegie Mellon University. Alex Graves. 2012. Supervised sequence labelling with recurrent neural networks. Springer. John Hale. 2016. Information-theoretical complexity metrics. Language and Linguistics Compass, 10(9):397–412. John Hale. 2017. Models of human sentence comprehension in computational psycholinguistics. In Mark Aronoff, editor, Oxford Research Encyclopedia of Linguistics. Oxford University Press. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Ronald M. Kaplan. 1972. Augmented transition networks as psychological models of sentence comprehension. Artificial Intelligence, 3:77–100. Martin Kay. 2005. ACL lifetime achievement award: A life of language. Computational Linguistics, 31(4). Frank Keller. 2010. Cognitively plausible models of human language processing. In Proceedings of the ACL 2010 Conference Short Papers, pages 60–67. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423–430, Sapporo, Japan. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What do recurrent neural network grammars learn about syntax? In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1249–1258. Association for Computational Linguistics. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. 
Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521– 535. Kevin Lund and Curt Burgess. 1996. Producing high-dimensional semantic spaces from lexical cooccurrence. Behavior Research Methods, Instruments, & Computers, 28(2):203–208. Eric Maris and Robert Oostenveld. 2007. Nonparametric statistical testing of EEG- and MEG-data. Journal of Neuroscience Methods, 164(1):177–190. Tom´aˇs Mikolov. 2012. Statistical Language Models Based on Neural Networks. Ph.D. thesis, Brno University of Technology. Brian Murphy, Leila Wehbe, and Alona Fyshe. 2018. Decoding language from the brain. In Thierry Poibeau and Aline Editors Villavicencio, editors, Language, Cognition, and Computational Models, pages 53–80. Cambridge University Press. Srini Narayanan and Daniel Jurafsky. 1998. Bayesian models of human sentence processing. In Proceedings of the 20th Annual Conference of the Cognitive Science Society, University of Wisconsin-Madson. Lee Osterhout and Phillip J. Holcomb. 1992. Eventrelated brain potentials elicited by syntactic anomaly. Journal of Memory and Language, 31:785–806. Colin Phillips. 2013. Parser & grammar relations: We don’t understand everything twice. In Montserrat Sanz, Itziar Laka, and Michael K. Tanenhaus, editors, Language Down the Garden Path: The Cognitive and Biological Basis of Linguistic Structures, chapter 16, pages 294–315. Oxford University Press. Brian Roark. 2004. Robust garden path parsing. Natural Language Engineering, 10(1):1–24. 2736 Brian Roark, Asaf Bachrach, Carlos Cardenas, and Christophe Pallier. 2009. Deriving lexical and syntactic expectation-based measures for psycholinguistic modeling via incremental top-down parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 324–333, Singapore. Marten van Schijndel, Brian Murphy, and William Schuler. 2015. Evidence of syntactic working memory usage in MEG data. In Proceedings of CMCL 2015, Denver, Colorado, USA. Marten van Schijndel and William Schuler. 2017. Approximations of predictive entropy correlate with reading times. In Proceedings of CogSci 2017, London, UK. Cognitive Science Society. Mitchell Stern, Daniel Fried, and Dan Klein. 2017. Effective inference for generative neural parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1695–1700, Copenhagen, Denmark. David J. Townsend and Thomas G. Bever. 2001. Sentence comprehension : the integration of habits and rules. MIT Press. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pages 2773–2781. Leila Wehbe, Ashish Vaswani, Kevin Knight, and Tom Mitchell. 2014. Aligning context-based statistical models of language with brain activity during reading. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 233–243, Doha, Qatar. Cai Wingfield, Li Su, Xunying Liu, Chao Zhang, Phil Woodland, Andrew Thwaites, Elisabeth Fonteneau, and William D. Marslen-Wilson. 2017. Relating dynamic brain states to dynamic machine states: Human and machine solutions to the speech recognition problem. PLOS Computational Biology, 13(9):1– 25.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2737–2746 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2737 Learning to Ask Good Questions: Ranking Clarification Questions using Neural Expected Value of Perfect Information Sudha Rao University of Maryland, College Park [email protected] Hal Daum´e III University of Maryland, College Park Microsoft Research, New York City [email protected] Abstract Inquiry is fundamental to communication, and machines cannot effectively collaborate with humans unless they can ask questions. In this work, we build a neural network model for the task of ranking clarification questions. Our model is inspired by the idea of expected value of perfect information: a good question is one whose expected answer will be useful. We study this problem using data from StackExchange, a plentiful online resource in which people routinely ask clarifying questions to posts so that they can better offer assistance to the original poster. We create a dataset of clarification questions consisting of ∼77K posts paired with a clarification question (and answer) from three domains of StackExchange: askubuntu, unix and superuser. We evaluate our model on 500 samples of this dataset against expert human judgments and demonstrate significant improvements over controlled baselines. 1 Introduction A principle goal of asking questions is to fill information gaps, typically through clarification questions.1 We take the perspective that a good question is the one whose likely answer will be useful. Consider the exchange in Figure 1, in which an initial poster (who we call “Terry”) asks for help configuring environment variables. This post is underspecified and a responder (“Parker”) asks a clarifying question (a) below, but could alternatively have asked (b) or (c): (a) What version of Ubuntu do you have? 1We define ‘clarification question’ as a question that asks for some information that is currently missing from the given context. Figure 1: A post on an online Q & A forum “askubuntu.com” is updated to fill the missing information pointed out by the question comment. (b) What is the make of your wificard? (c) Are you running Ubuntu 14.10 kernel 4.4.0-59generic on an x86 64 architecture? Parker should not ask (b) because an answer is unlikely to be useful; they should not ask (c) because it is too specific and an answer like “No” or “I do not know” gives little help. Parker’s question (a) is much better: it is both likely to be useful, and is plausibly answerable by Terry. In this work, we design a model to rank a candidate set of clarification questions by their usefulness to the given post. We imagine a use case (more discussion in §7) in which, while Terry is writing their post, a system suggests a shortlist of questions asking for information that it thinks people like Parker might need to provide a solution, thus enabling Terry to immediately clarify their post, potentially leading to a much quicker resolution. Our model is based on the decision theoretic framework of the Expected Value of Perfect Information (EVPI) (Avriel and Williams, 1970), a measure of the value of gathering additional information. In our setting, we use EVPI to calculate which questions are most likely to elicit an answer that would make the post more informative. 2738 Figure 2: The behavior of our model during test time: Given a post p, we retrieve 10 posts similar to post p using Lucene. 
The questions asked to those 10 posts are our question candidates Q and the edits made to the posts in response to the questions are our answer candidates A. For each question candidate qi, we generate an answer representation F(p, qi) and calculate how close is the answer candidate aj to our answer representation F(p, qi). We then calculate the utility of the post p if it were updated with the answer aj. Finally, we rank the candidate questions Q by their expected utility given the post p (Eq 1). Our work has two main contributions: 1. A novel neural-network model for addressing the task of ranking clarification question built on the framework of expected value of perfect information (§2). 2. A novel dataset, derived from StackExchange2, that enables us to learn a model to ask clarifying questions by looking at the types of questions people ask (§3). We formulate this task as a ranking problem on a set of potential clarification questions. We evaluate models both on the task of returning the original clarification question and also on the task of picking any of the candidate clarification questions marked as good by experts (§4). We find that our EVPI model outperforms the baseline models when evaluated against expert human annotations. We include a few examples of human annotations along with our model performance on them in the supplementary material. We have released our dataset of ∼77K (p, q, a) triples and the expert annotations on 500 triples to help facilitate further research in this task.3 2 Model description We build a neural network model inspired by the theory of expected value of perfect information (EVPI). EVPI is a measurement of: if I were to acquire information X, how useful would that be to 2We use data from StackExchange; per license cc-by-sa 3.0, the data is “intended to be shared and remixed” (with attribution). 3https://github.com/raosudha89/ ranking_clarification_questions me? However, because we haven’t acquired X yet, we have to take this quantity in expectation over all possible X, weighted by each X’s likelihood. In our setting, for any given question qi that we can ask, there is a set A of possible answers that could be given. For each possible answer aj ∈A, there is some probability of getting that answer, and some utility if that were the answer we got. The value of this question qi is the expected utility, over all possible answers: EVPI(qi|p) = X aj∈A P[aj|p, qi]U(p + aj) (1) In Eq 1, p is the post, qi is a potential question from a set of candidate questions Q and aj is a potential answer from a set of candidate answers A. Here, P[aj|p, qi] measures the probability of getting an answer aj given an initial post p and a clarifying question qi, and U(p + aj) is a utility function that measures how much more complete p would be if it were augmented with answer aj. The modeling question then is how to model: 1. The probability distribution P[aj|p, qi] and 2. The utility function U(p + aj). In our work, we represent both using neural networks over the appropriate inputs. We train the parameters of the two models jointly to minimize a joint loss defined such that an answer that has a higher potential of increasing the utility of a post gets a higher probability. Figure 2 describes the behavior of our model during test time. Given a post p, we generate a set of candidate questions and a set of candidate 2739 Figure 3: Training of our answer generator. 
Given a post pi and its question qi, we generate an answer representation that is not only close to its original answer ai, but also close to one of its candidate answers aj if the candidate question qj is close to the original question qi. answers (§2.1). Given a post p and a question candidate qi, we calculate how likely is this question to be answered using one of our answer candidates aj (§2.2). Given a post p and an answer candidate aj, we calculate the utility of the updated post i.e. U(p + aj) (§2.3). We compose these modules into a joint neural network that we optimize end-to-end over our data (§2.4). 2.1 Question & answer candidate generator Given a post p, our first step is to generate a set of question and answer candidates. One way that humans learn to ask questions is by looking at how others ask questions in a similar situation. Using this intuition we generate question candidates for a given post by identifying posts similar to the given post and then looking at the questions asked to those posts. For identifying similar posts, we use Lucene4, a software extensively used in information retrieval for extracting documents relevant to a given query from a pool of documents. Lucene implements a variant of the term frequency-inverse document frequency (TF-IDF) model to score the extracted documents according to their relevance to the query. We use Lucene to find the top 10 posts most similar to a given post from our dataset (§3). We consider the questions asked to these 10 posts as our set of question candidates Q and the edits made to the posts in response to the questions as our set of answer candidates A. Since the top-most similar candidate extracted by Lucene is always the original post itself, the original question and answer paired with the post is always one of the candidates in Q and A. §3 describes in detail the process of extracting the 4https://lucene.apache.org/ (post, question, answer) triples from the StackExchange datadump. 2.2 Answer modeling Given a post p and a question candidate qi, our second step is to calculate how likely is this question to be answered using one of our answer candidates aj. We first generate an answer representation by combining the neural representations of the post and the question using a function Fans(¯p, ¯qi) (details in §2.4). Given such a representation, we measure the distance between this answer representation and one of the answer candidates aj using the function below: dist(Fans(¯p, ¯qi), ˆaj) = 1 −cos sim(Fans(¯p, ¯qi), ˆaj) The likelihood of an answer candidate aj being the answer to a question qi on post p is finally calculated by combining this distance with the cosine similarity between the question qi and the question qj paired with the answer candidate aj: P[aj|p, qi] = exp−dist(Fans(¯p, ¯qi), ˆaj) ∗cos sim(ˆqi, ˆqj) (2) where ˆaj, ˆqi and ˆqj are the average word vector of aj, qi and qj respectively (details in §2.4) and cos sim is the cosine similarity between the two input vectors. We model our answer generator using the following intuition: a question can be asked in several different ways. For e.g. in Figure 1, the question “What version of Ubuntu do you have?” can be asked in other ways like “What version of operating system are you using?”, “Version of OS?”, etc. Additionally, for a given post and a question, there can be 2740 several different answers to that question. For instance, “Ubuntu 14.04 LTS”, “Ubuntu 12.0”, “Ubuntu 9.0”, are all valid answers. 
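As a concrete illustration of Eq 2, the following minimal numpy sketch computes the answer likelihood from an answer representation and average word vectors. The vectors here are random stand-ins rather than the learned representations of §2.4, and the function names are introduced only for this example.

import numpy as np

def cos_sim(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def answer_likelihood(f_ans, a_hat_j, q_hat_i, q_hat_j):
    dist = 1.0 - cos_sim(f_ans, a_hat_j)               # distance to candidate answer a_j
    return np.exp(-dist) * cos_sim(q_hat_i, q_hat_j)   # Eq 2

rng = np.random.default_rng(0)
f_ans, a_hat_j, q_hat_i, q_hat_j = (rng.normal(size=200) for _ in range(4))
print(answer_likelihood(f_ans, a_hat_j, q_hat_i, q_hat_j))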
To generate an answer representation capturing these generalizations, we train our answer generator on our triples dataset (§3) using the loss function below: lossans(pi, qi, ai, Qi) = dist(Fans(¯pi, ¯qi), ˆai) (3) + X j∈Q  dist(Fans(¯pi, ¯qi), ˆaj) ∗cos sim(ˆqi, ˆqj)  where, ˆa and ˆq is the average word vectors of a and q respectively (details in §2.4), cos sim is the cosine similarity between the two input vectors. This loss function can be explained using the example in Figure 3. Question qi is the question paired with the given post pi. In Eq 3, the first term forces the function Fans(¯pi, ¯qi) to generate an answer representation as close as possible to the correct answer ai. Now, a question can be asked in several different ways. Let Qi be the set of candidate questions for post pi, retrieved from the dataset using Lucene (§ 2.1). Suppose a question candidate qj is very similar to the correct question qi ( i.e. cos sim(ˆqi, ˆqj) is near zero). Then the second term forces the answer representation Fans(¯pi, ¯qi) to be close to the answer aj corresponding to the question qj as well. Thus in Figure 3, the answer representation will be close to aj (since qj is similar to qi), but may not be necessarily close to ak (since qk is dissimilar to qi). 2.3 Utility calculator Given a post p and an answer candidate aj, the third step is to calculate the utility of the updated post i.e. U(p+aj). As expressed in Eq 1, this utility function measures how useful it would be if a given post p were augmented with an answer aj paired with a different question qj in the candidate set. Although theoretically, the utility of the updated post can be calculated only using the given post (p) and the candidate answer (aj), empirically we find that our neural EVPI model performs better when the candidate question (qj) paired with the candidate answer is a part of the utility function. We attribute this to the fact that much information about whether an answer increases the utility of a post is also contained in the question asked to the post. We train our utility calculator using our dataset of (p, q, a) triples (§3). We label all the (pi, qi, ai) pairs from our triples dataset with label y = 1. To get negative samples, we make use of the answer candidates generated using Lucene as described in §2.1. For each aj ∈Ai, where Ai is the set of answer candidates for post pi, we label the pair (pi, qj, aj) with label y = 0, except for when aj = ai. Thus, for each post pi in our triples dataset, we have one positive sample and nine negative samples. It should be noted that this is a noisy labelling scheme since a question not paired with the original question in our dataset can often times be a good question to ask to the post (§4). However, since we do not have annotations for such other good questions at train time, we assume such a labelling. Given a post pi and an answer aj paired with the question qj, we combine their neural representations using a function Futil( ¯pi, ¯qj, ¯aj) (details in §2.4). The utility of the updated post is then defined as U(pi + aj) = σ(Futil( ¯pi, ¯qj, ¯aj))5. We want this utility to be close to 1 for all the positively labelled (p, q, a) triples and close to 0 for all the negatively labelled (p, q, a) triples. We therefore define our loss using the binary cross-entropy formulation below: lossutil(yi, ¯pi, ¯qj, ¯aj) = yi log(σ(Futil( ¯pi, ¯qj, ¯aj))) (4) 2.4 Our joint neural network model Our fundamental representation is based on recurrent neural networks over word embeddings. 
We obtain the word embeddings using the GloVe (Pennington et al., 2014) model trained on the entire datadump of StackExchange.6. In Eq 2 and Eq 3, the average word vector representations ˆq and ˆa are obtained by averaging the GloVe word embeddings for all words in the question and the answer respectively. Given an initial post p, we generate a post neural representation ¯p using a post LSTM (long short-term memory architecture) (Hochreiter and Schmidhuber, 1997). The input layer consists of word embeddings of the words in the post which is fed into a single hidden layer. The output of each of the hidden states is averaged together to get our neural representation ¯p. Similarly, given a question q and an answer a, we generate the neural representations ¯q and ¯a using a question LSTM and an answer LSTM respectively. We define the function Fans in our answer model as a feedforward neural network with five hidden layers on the inputs ¯p and ¯q. Likewise, we 5σ is the sigmoid function. 6Details in the supplementary material. 2741 define the function Futil in our utility calculator as a feedforward neural network with five hidden layers on the inputs ¯p, ¯q and ¯a. We train the parameters of the three LSTMs corresponding to p, q and a, and the parameters of the two feedforward neural networks jointly to minimize the sum of the loss of our answer model (Eq 3) and our utility calculator (Eq 4) over our entire dataset: X i X j lossans(¯pi, ¯qi, ¯ai, Qi) + lossutil(yi, ¯pi, ¯qj, ¯aj) (5) Given such an estimate P[aj|p, qi] of an answer and a utility U(p + aj) of the updated post, we rank the candidate questions by their value as calculated using Eq 1. The remaining question, then, is how to get data that enables us to train our answer model and our utility calculator. Given data, the training becomes a multitask learning problem, where we learn simultaneously to predict utility and to estimate the probability of answers. 3 Dataset creation StackExchange is a network of online question answering websites about varied topics like academia, ubuntu operating system, latex, etc. The data dump of StackExchange contains timestamped information about the posts, comments on the post and the history of the revisions made to the post. We use this data dump to create our dataset of (post, question, answer) triples: where the post is the initial unedited post, the question is the comment containing a question and the answer is either the edit made to the post after the question or the author’s response to the question in the comments section. Extract posts: We use the post histories to identify posts that have been updated by its author. We use the timestamp information to retrieve the initial unedited version of the post. Extract questions: For each such initial version of the post, we use the timestamp information of its comments to identify the first question comment made to the post. We truncate the comment till its question mark ’?’ to retrieve the question part of the comment. We find that about 7% of these are rhetoric questions that indirectly suggest a solution to the post. For e.g. “have you considered installing X?”. We do a manual analysis of Train Tune Test askubuntu 19,944 2493 2493 unix 10,882 1360 1360 superuser 30,852 3857 3856 Table 1: Table above shows the sizes of the train, tune and test split of our dataset for three domains. these non-clarification questions and hand-crafted a few rules to remove them. 
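As an illustration of the question-extraction step (not the authors' released code), the sketch below takes the first comment containing a question mark, truncates it at the '?', and filters rhetorical questions with a couple of hand-written patterns; the patterns are illustrative guesses rather than the actual hand-crafted rules.

import re

RHETORICAL_PATTERNS = [
    r"^have you (considered|tried)\b",   # e.g. "have you considered installing X?"
    r"^why (not|don't you)\b",
]

def extract_question(comments):
    """comments: the post's comment strings, sorted by timestamp."""
    for comment in comments:
        if "?" not in comment:
            continue
        question = comment[: comment.index("?") + 1].strip()  # truncate at the question mark
        if any(re.search(p, question.lower()) for p in RHETORICAL_PATTERNS):
            return None  # looks like a rhetorical suggestion, so discard this post
        return question
    return None

print(extract_question([
    "Thanks for posting this.",
    "What version of Ubuntu do you have? I ask because...",
]))  # -> "What version of Ubuntu do you have?"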
7 Extract answers: We extract the answer to a clarification question in the following two ways: (a) Edited post: Authors tend to respond to a clarification question by editing their original post and adding the missing information. In order to account for edits made for other reasons like stylistic updates and grammatical corrections, we consider only those edits that are longer than four words. Authors can make multiple edits to a post in response to multiple clarification questions.8 To identify the edit made corresponding to the given question comment, we choose the edit closest in time following the question. (b) Response to the question: Authors also respond to clarification questions as subsequent comments in the comment section. We extract the first comment by the author following the clarification question as the answer to the question. In cases where both the methods above yield an answer, we pick the one that is the most semantically similar to the question, where the measure of similarity is the cosine distance between the average word embeddings of the question and the answer. We extract a total of 77,097 (post, question, answer) triples across three domains in StackExchange (Table 1). We will release this dataset along with the the nine question and answer candidates per triple that we generate using lucene (§ 2.1). We include an analysis of our dataset in the supplementary material. 4 Evaluation design We define our task as given a post p, and a set of candidate clarification questions Q, rank the questions according to their usefulness to the post. 7Details in the supplementary material. 8On analysis, we find that 35%-40% of the posts get asked multiple clarification questions. We include only the first clarification question to a post in our dataset since identifying if the following questions are clarifications or a part of a dialogue is non-trivial. 2742 Since the candidate set includes the original question q that was asked to the post p, one possible approach to evaluation would be to look at how often the original question is ranked higher up in the ranking predicted by a model. However, there are two problems to this approach: 1) Our dataset creation process is noisy. The original question paired with the post may not be a useful question. For e.g. “are you seriously asking this question?”, “do you mind making that an answer?”9. 2) The nine other questions in the candidate set are obtained by looking at questions asked to posts that are similar to the given post.10 This greatly increases the possibility of some other question(s) being more useful than the original question paired with the post. This motivates an evaluation design that does not rely solely on the original question but also uses human judgments. We randomly choose a total of 500 examples from the test sets of the three domains proportional to their train set sizes (askubuntu:160, unix:90 and superuser:250) to construct our evaluation set. 4.1 Annotation scheme Due to the technical nature of the posts in our dataset, identifying useful questions requires technical experts. We recruit 10 such experts on Upwork11 who have prior experience in unix based operating system administration.12 We provide the annotators with a post and a randomized list of the ten question candidates obtained using Lucene (§2.1) and ask them to select a single “best” (B) question to ask, and additionally mark as “valid” (V ) other questions that they thought would be okay to ask in the context of the original post. 
We enforce that the “best” question be always marked as a “valid” question. We group the 10 annotators into 5 pairs and assign the same 100 examples to the two annotators in a pair. 4.2 Annotation analysis We calculate the inter-annotator agreement on the “best” and the “valid” annotations using Cohen’s Kappa measurement. When calculating the agreement on the “best” in the strict sense, we get a low 9Data analysis included in the supplementary material suggests 9% of the questions are not useful. 10Note that this setting is different from the distractorbased setting popularly used in dialogue (Lowe et al., 2015) where the distractor candidates are chosen randomly from the corpus. 11https://upwork.com 12Details in the supplementary material. Figure 4: Distribution of the count of questions in the intersection of the “valid” annotations. agreement of 0.15. However, when we relax this to a case where the question marked as“best” by one annotator is marked as “valid” by another, we get an agreement of 0.87. The agreement on the “valid” annotations, on the other hand, was higher: 0.58. We calculate this agreement on the binary judgment of whether a question was marked as valid by the annotator. Given these annotations, we calculate how often is the original question marked as “best” or “valid” by the two annotators. We find that 72% of the time one of the annotators mark the original as the “best”, whereas only 20% of the time both annotators mark it as the “best” suggesting against an evaluation solely based on the original question. On the other hand, 88% of the time one of the two annotators mark it as a “valid” question confirming the noise in our training data.13 Figure 4 shows the distribution of the counts of questions in the intersection of “valid” annotations (blue legend). We see that about 85% of the posts have more than 2 valid questions and 50% have more than 3 valid questions. The figure also shows the distribution of the counts when the original question is removed from the intersection (red legend). Even in this set, we find that about 60% of the posts have more than two valid questions. These numbers suggests that the candidate set of questions retrieved using Lucene (§2.1) very often contains useful clarification questions. 5 Experimental results Our primary research questions that we evaluate experimentally are: 1. Does a neural network architecture improve upon non-neural baselines? 1376% of the time both the annotators mark it as a “valid”. 2743 B1 ∪B2 V 1 ∩V 2 Original Model p@1 p@3 p@5 MAP p@1 p@3 p@5 MAP p@1 Random 17.5 17.5 17.5 35.2 26.4 26.4 26.4 42.1 10.0 Bag-of-ngrams 19.4 19.4 18.7 34.4 25.6 27.6 27.5 42.7 10.7 Community QA 23.1 21.2 20.0 40.2 33.6 30.8 29.1 47.0 18.5 Neural (p, q) 21.9 20.9 19.5 39.2 31.6 30.0 28.9 45.5 15.4 Neural (p, a) 24.1 23.5 20.6 41.4 32.3 31.5 29.0 46.5 18.8 Neural (p, q, a) 25.2 22.7 21.3 42.5 34.4 31.8 30.1 47.7 20.5 EVPI 27.7 23.4 21.5 43.6 36.1 32.2 30.5 49.2 21.4 Table 2: Model performances on 500 samples when evaluated against the union of the “best” annotations (B1 ∪B2), intersection of the “valid” annotations (V 1 ∩V 2) and the original question paired with the post in the dataset. The difference between the bold and the non-bold numbers is statistically significant with p < 0.05 as calculated using bootstrap test. p@k is the precision of the k questions ranked highest by the model and MAP is the mean average precision of the ranking predicted by the model. 2. 
Does the EVPI formalism provide leverage over a similarly expressive feedforward network? 3. Are answers useful in identifying the right question? 4. How do the models perform when evaluated on the candidate questions excluding the original? 5.1 Baseline methods We compare our model with following baselines: Random: Given a post, we randomly permute its set of 10 candidate questions uniformly.14 Bag-of-ngrams: Given a post and a set of 10 question and answer candidates, we construct a bag-of-ngrams representation for the post, question and answer. We train the baseline on all the positive and negative candidate triples (same as in our utility calculator (§2.3)) to minimize hinge loss on misclassification error using cross-product features between each of (p, q), (q, a) and (p, a). We tune the ngram length and choose n=3 which performs best on the tune set. The question candidates are finally ranked according to their predictions for the positive label. Community QA: The recent SemEval2017 Community Question-Answering (CQA) (Nakov et al., 2017) included a subtask for ranking a set of comments according to their relevance to a given post in the Qatar Living15 forum. Nandi et al. (2017), winners of this subtask, developed a logistic regression model using features based on 14We take the average over 1000 random permutations. 15http://www.qatarliving.com/forum string similarity, word embeddings, etc. We train this model on all the positively and negatively labelled (p, q) pairs in our dataset (same as in our utility calculator (§2.3), but without a). We use a subset of their features relevant to our task.16 Neural baselines: We construct the following neural baselines based on the LSTM representation of their inputs (as described in §2.4): 1. Neural(p, q): Input is concatenation of ¯p and ¯q. 2. Neural(p, a): Input is concatenation of ¯p and ¯a. 3. Neural(p, q, a): Input is concatenation of ¯p, ¯q and ¯a. Given these inputs, we construct a fully connected feedforward neural network with 10 hidden layers and train it to minimize the binary cross entropy across all positive and negative candidate triples (same as in our utility calculator (§ 2.3)). The major difference between the neural baselines and our EVPI model is in the loss function: the EVPI model is trained to minimize the joint loss between the answer model (defined on Fans(p, q) in Eq 3) and the utility calculator (defined on Futil(p, q, a) in Eq 4) whereas the neural baselines are trained to minimize the loss directly on F(p, q), F(p, a) or F(p, q, a). We include the implementation details of all our neural models in the supplementary material. 5.2 Results 5.2.1 Evaluating against expert annotations We first describe the results of the different models when evaluated against the expert annotations we collect on 500 samples (§4). Since the annotators 16Details in the supplementary material. 2744 had a low agreement on a single best, we evaluate against the union of the “best” annotations (B1 ∪ B2 in Table 2) and against the intersection of the “valid” annotations (V 1 ∩V 2 in Table 2). Among non-neural baselines, we find that the bag-of-ngrams baseline performs slightly better than random but worse than all the other models. The Community QA baseline, on the other hand, performs better than the neural baseline (Neural (p, q)), both of which are trained without using the answers. 
The neural baselines with answers (Neural(p, q, a) and Neural(p, a)) outperform the neural baseline without answers (Neural(p, q)), showing that answer helps in selecting the right question. More importantly, EVPI outperforms the Neural (p, q, a) baseline across most metrics. Both models use the same information regarding the true question and answer and are trained using the same number of model parameters.17 However, the EVPI model, unlike the neural baseline, additionally makes use of alternate question and answer candidates to compute its loss function. This shows that when the candidate set consists of questions similar to the original question, summing over their utilities gives us a boost. 5.2.2 Evaluating against the original question The last column in Table 2 shows the results when evaluated against the original question paired with the post. The bag-of-ngrams baseline performs similar to random, unlike when evaluated against human judgments. The Community QA baseline again outperforms Neural(p, q) model and comes very close to the Neural (p, a) model. As before, the neural baselines that make use of the answer outperform the one that does not use the answer and the EVPI model performs significantly better than Neural(p, q, a). 5.2.3 Excluding the original question In the preceding analysis, we considered a setting in which the “ground truth” original question was in the candidate set Q. While this is a common evaluation framework in dialog response selection (Lowe et al., 2015), it is overly optimistic. We, therefore, evaluate against the “best” and the “valid” annotations on the nine other question candidates. We find that the neural models beat the 17We use 10 hidden layers in the feedforward network of the neural baseline and five hidden layers each in the two feedforward networks Fans and Futil of the EVPI model. non-neural baselines. However, the differences between all the neural models are statistically insignificant.18 6 Related work Most prior work on question generation has focused on generating reading comprehension questions: given text, write questions that one might find on a standardized test (Vanderwende, 2008; Heilman, 2011; Rus et al., 2011; Olney et al., 2012). Comprehension questions, by definition, are answerable from the provided text. Clarification questions–our interest–are not. Outside reading comprehension questions, Labutov et al. (2015) generate high-level question templates by crowdsourcing which leads to significantly less data than we collect using our method. Liu et al. (2010) use template question generation to help authors write better related work sections. Mostafazadeh et al. (2016) introduce a Visual Question Generation task where the goal is to generate natural questions that are not about what is present in the image rather about what can be inferred given the image, somewhat analogous to clarification questions. Penas and Hovy (2010) identify the notion of missing information similar to us, but they fill the knowledge gaps in a text with the help of external knowledge bases, whereas we instead ask clarification questions. Artzi and Zettlemoyer (2011) use human-generated clarification questions to drive a semantic parser where the clarification questions are aimed towards simplifying a user query; whereas we generate clarification questions aimed at identifying missing information in a text. 
Among works that use community question answer forums, the keywords to questions (K2Q) system (Zheng et al., 2011) generates a list of candidate questions and refinement words, given a set of input keywords, to help a user ask a better question. Figueroa and Neumann (2013) rank different paraphrases of query for effective search on forums. (Romeo et al., 2016) develop a neural network based model for ranking questions on forums with the intent of retrieving similar other question. The recent SemEval-2017 Community QuestionAnswering (CQA) (Nakov et al., 2017) task included a subtask to rank the comments according to their relevance to the post. Our task primarily differs from this task in that we want to identify a 18Results included in the supplementary material. 2745 question comment which is not only relevant to the post but will also elicit useful information missing from the post. Hoogeveen et al. (2015) created the CQADupStack dataset using StackExchange forums for the task of duplicate question retrieval. Our dataset, on the other hand, is designed for the task of ranking clarification questions asked as comments to a post. 7 Conclusion We have constructed a new dataset for learning to rank clarification questions, and proposed a novel model for solving this task. Our model integrates well-known deep network architectures with the classic notion of expected value of perfect information, which effectively models a pragmatic choice on the part of the questioner: how do I imagine the other party would answer if I were to ask this question. Such pragmatic principles have recently been shown to be useful in other tasks as well (Golland et al., 2010; Smith et al., 2013; Orita et al., 2015; Andreas and Klein, 2016). One can naturally extend our EVPI approach to a full reinforcement learning approach to handle multi-turn conversations. Our results shows that the EVPI model is a promising formalism for the question generation task. In order to move to a full system that can help users like Terry write better posts, there are three interesting lines of future work. First, we need it to be able to generalize: for instance by constructing templates of the form “What version of are you running?” into which the system would need to fill a variable. Second, in order to move from question ranking to question generation, one could consider sequence-to-sequence based neural network models that have recently proven to be effective for several language generation tasks (Sutskever et al., 2014; Serban et al., 2016; Yin et al., 2016). Third is in evaluation: given that this task requires expert human annotations and also given that there are multiple possible good questions to ask, how can we automatically measure performance at this task?, a question faced in dialog and generation more broadly (Paek, 2001; Lowe et al., 2015; Liu et al., 2016). Acknowledgments The authors thank the three anonymous reviewers of this paper, and the anonymous reviewers of the previous versions for their helpful comments and suggestions. They also thank the members of the Computational Linguistics and Information Processing (CLIP) lab at University of Maryland for helpful discussions. This work was supported by NSF grant IIS1618193. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsors. References Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 1173–1182. Yoav Artzi and Luke Zettlemoyer. 2011. Bootstrapping semantic parsers from conversations. In Proceedings of the conference on empirical methods in natural language processing. Association for Computational Linguistics, pages 421–432. Mordecai Avriel and AC Williams. 1970. The value of information and stochastic programming. Operations Research 18(5):947–954. Alejandro Figueroa and G¨unter Neumann. 2013. Learning to rank effective paraphrases from query logs for community question answering. In AAAI. volume 13, pages 1099–1105. Dave Golland, Percy Liang, and Dan Klein. 2010. A game-theoretic approach to generating spatial descriptions. In Proceedings of the 2010 conference on empirical methods in natural language processing. Association for Computational Linguistics, pages 410–419. Michael Heilman. 2011. Automatic factual question generation from text. Ph.D. thesis, Carnegie Mellon University. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Doris Hoogeveen, Karin M Verspoor, and Timothy Baldwin. 2015. Cqadupstack: A benchmark data set for community question-answering research. In Proceedings of the 20th Australasian Document Computing Symposium. ACM, page 3. Igor Labutov, Sumit Basu, and Lucy Vanderwende. 2015. Deep questions without deep understanding. In ACL (1). pages 889–898. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2746 2016 Conference on Empirical Methods in Natural Language Processing. pages 2122–2132. Ming Liu, Rafael A Calvo, and Vasile Rus. 2010. Automatic question generation for literature review writing support. In International Conference on Intelligent Tutoring Systems. Springer, pages 45–54. Ryan Lowe, Nissan Pow, Iulian V Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue. page 285. Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Vanderwende. 2016. Generating natural questions about an image. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 1802– 1813. Preslav Nakov, Doris Hoogeveen, Llu´ıs M`arquez, Alessandro Moschitti, Hamdy Mubarak, Timothy Baldwin, and Karin Verspoor. 2017. Semeval-2017 task 3: Community question answering. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). pages 27–48. Titas Nandi, Chris Biemann, Seid Muhie Yimam, Deepak Gupta, Sarah Kohail, Asif Ekbal, and Pushpak Bhattacharyya. 2017. Iit-uhh at semeval-2017 task 3: Exploring multiple features for community question answering and implicit dialogue identification. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). pages 90–97. Andrew McGregor Olney, Arthur C Graesser, and Natalie K Person. 2012. Question generation from concept maps. D&D 3(2):75–99. Naho Orita, Eliana Vornov, Naomi Feldman, and Hal Daum´e III. 2015. Why discourse affects speakers’ choice of referring expressions. In ACL (1). pages 1639–1649. Tim Paek. 2001. Empirical methods for evaluating dialog systems. 
In Proceedings of the workshop on Evaluation for Language and Dialogue SystemsVolume 9. Association for Computational Linguistics, page 2. Anselmo Penas and Eduard Hovy. 2010. Filling knowledge gaps in text for machine reading. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters. Association for Computational Linguistics, pages 979–987. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). pages 1532–1543. Salvatore Romeo, Giovanni Da San Martino, Alberto Barr´on-Cedeno, Alessandro Moschitti, Yonatan Belinkov, Wei-Ning Hsu, Yu Zhang, Mitra Mohtarami, and James Glass. 2016. Neural attention for learning to rank questions in community question answering. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. pages 1734–1745. Vasile Rus, Paul Piwek, Svetlana Stoyanchev, Brendan Wyse, Mihai Lintean, and Cristian Moldovan. 2011. Question generation shared task and evaluation challenge: Status report. In Proceedings of the 13th European Workshop on Natural Language Generation. Association for Computational Linguistics, pages 318–320. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI. pages 3776–3784. Nathaniel J Smith, Noah Goodman, and Michael Frank. 2013. Learning and using language via recursive pragmatic reasoning about other agents. In Advances in neural information processing systems. pages 3039–3047. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Lucy Vanderwende. 2008. The importance of being important: Question generation. In Proceedings of the 1st Workshop on the Question Generation Shared Task Evaluation Challenge, Arlington, VA. Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, and Xiaoming Li. 2016. Neural generative question answering. In Proceedings of the TwentyFifth International Joint Conference on Artificial Intelligence. AAAI Press, pages 2972–2978. Zhicheng Zheng, Xiance Si, Edward Chang, and Xiaoyan Zhu. 2011. K2q: Generating natural language questions from keywords with user refinements. In Proceedings of 5th International Joint Conference on Natural Language Processing. pages 947–955.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2747–2755 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2747 Let’s do it “again”: A First Computational Approach to Detecting Adverbial Presupposition Triggers Andre Cianflone ∗ Yulan Feng ∗ Jad Kabbara ∗ Jackie Chi Kit Cheung School of Computer Science MILA McGill University Montreal, QC, Canada Montreal, QC, Canada {andre.cianflone@mail,yulan.feng@mail,jad@cs,jcheung@cs}.mcgill.ca Abstract We introduce the task of predicting adverbial presupposition triggers such as also and again. Solving such a task requires detecting recurring or similar events in the discourse context, and has applications in natural language generation tasks such as summarization and dialogue systems. We create two new datasets for the task, derived from the Penn Treebank and the Annotated English Gigaword corpora, as well as a novel attention mechanism tailored to this task. Our attention mechanism augments a baseline recurrent neural network without the need for additional trainable parameters, minimizing the added computational cost of our mechanism. We demonstrate that our model statistically outperforms a number of baselines, including an LSTM-based language model. 1 Introduction In pragmatics, presuppositions are assumptions or beliefs in the common ground between discourse participants when an utterance is made (Frege, 1892; Strawson, 1950; Stalnaker, 1973, 1998), and are ubiquitous in naturally occurring discourses (Beaver and Geurts, 2014). Presuppositions underly spoken statements and written sentences and understanding them facilitates smooth communication. We refer to expressions that indicate the presence of presuppositions as presupposition triggers. These include definite descriptions, factive verbs and certain adverbs, among others. For example, consider the following statements: (1) John is going to the restaurant again. ∗Authors (listed in alphabetical order) contributed equally. (2) John has been to the restaurant. (1) is only appropriate in the context where (2) is held to be true because of the presence of the presupposition trigger again. One distinguishing characteristic of presupposition is that it is unaffected by negation of the presupposing context, unlike other semantic phenomena such as entailment and implicature. The negation of (1), John is not going to the restaurant again., also presupposes (2). Our focus in this paper is on adverbial presupposition triggers such as again, also and still. Adverbial presupposition triggers indicate the recurrence, continuation, or termination of an event in the discourse context, or the presence of a similar event. In one study of presuppositional triggers in English journalistic texts (Khaleel, 2010), adverbial triggers were found to be the most commonly occurring presupposition triggers after existential triggers.1 Despite their frequency, there has been little work on these triggers in the computational literature from a statistical, corpus-driven perspective. As a first step towards language technology systems capable of understanding and using presuppositions, we propose to investigate the detection of contexts in which these triggers can be used. 
This task constitutes an interesting testing ground for pragmatic reasoning, because the cues that are indicative of contexts containing recurring or similar events are complex and often span more than one sentence, as illustrated in Sentences (1) and (2). Moreover, such a task has immediate practical consequences. For example, in language generation applications such as summarization and dialogue systems, adding presuppositional triggers in contextually appropriate loca1Presupposition of existence are triggered by possessive constructions, names or definite noun phrases. 2748 tions can improve the readability and coherence of the generated output. We create two datasets based on the Penn Treebank corpus (Marcus et al., 1993) and the English Gigaword corpus (Graff et al., 2007), extracting contexts that include presupposition triggers as well as other similar contexts that do not, in order to form a binary classification task. In creating our datasets, we consider a set of five target adverbs: too, again, also, still, and yet. We focus on these adverbs in our investigation because these triggers are well known in the existing linguistic literature and commonly triggering presuppositions. We control for a number of potential confounding factors, such as class balance, and the syntactic governor of the triggering adverb, so that models cannot exploit these correlating factors without any actual understanding of the presuppositional properties of the context. We test a number of standard baseline classifiers on these datasets, including a logistic regression model and deep learning methods based on recurrent neural networks (RNN) and convolutional neural networks (CNN). In addition, we investigate the potential of attention-based deep learning models for detecting adverbial triggers. Attention is a promising approach to this task because it allows a model to weigh information from multiple points in the previous context and infer long-range dependencies in the data (Bahdanau et al., 2015). For example, the model could learn to detect multiple instances involving John and restaurants, which would be a good indication that again is appropriate in that context. Also, an attention-based RNN has achieved success in predicting article definiteness, which involves another class of presupposition triggers (Kabbara et al., 2016). As another contribution, we introduce a new weighted pooling attention mechanism designed for predicting adverbial presupposition triggers. Our attention mechanism allows for a weighted averaging of our RNN hidden states where the weights are informed by the inputs, as opposed to a simple unweighted averaging. Our model uses a form of self-attention (Paulus et al., 2018; Vaswani et al., 2017), where the input sequence acts as both the attention mechanism’s query and key/value. Unlike other attention models, instead of simply averaging the scores to be weighted, our approach aggregates (learned) attention scores by learning a reweighting scheme of those scores through another level (dimension) of attention. Additionally, our mechanism does not introduce any new parameters when compared to our LSTM baseline, reducing its computational impact. We compare our model using the novel attention mechanism against the baseline classifiers in terms of prediction accuracy. Our model outperforms these baselines for most of the triggers on the two datasets, achieving 82.42% accuracy on predicting the adverb “also” on the Gigaword dataset. The contributions of this work are as follows: 1. 
We introduce the task of predicting adverbial presupposition triggers. 2. We present new datasets for the task of detecting adverbial presupposition triggers, with a data extraction method that can be applied to other similar pre-processing tasks. 3. We develop a new attention mechanism in an RNN architecture that is appropriate for the prediction of adverbial presupposition triggers, and show that its use results in better prediction performance over a number of baselines without introducing additional parameters. 2 Related Work 2.1 Presupposition and pragmatic reasoning The discussion of presupposition can be traced back to Frege’s work on the philosophy of language (Frege, 1892), which later leads to the most commonly accepted view of presupposition called the Frege-Strawson theory (Kaplan, 1970; Strawson, 1950). In this view, presuppositions are preconditions for sentences/statements to be true or false. To the best of our knowledge, there is no previous computational work that directly investigates adverbial presupposition. However in the fields of semantics and pragmatics, there exist linguistic studies on presupposition that involve adverbs such as “too” and “again” (e.g., (Blutner et al., 2003), (Kang, 2012)) as a pragmatic presupposition trigger. Also relevant to our work is (Kabbara et al., 2016), which proposes using an attention-based LSTM network to predict noun phrase definiteness in English. Their work demonstrates the ability of these attention-based models to pick up on contextual cues for pragmatic reasoning. 2749 Many different classes of construction can trigger presupposition in an utterance, this includes but is not limited to stressed constituents, factive verbs, and implicative verbs (Zare et al., 2012). In this work, we focus on the class of adverbial presupposition triggers. Our task setup resembles the Cloze test used in psychology (Taylor, 1953; E. B. Coleman, 1968; Earl F. Rankin, 1969) and machine comprehension (Riloff and Thelen, 2000), which tests text comprehension via a fill-in-the-blanks task. We similarly pre-process our samples such that they are roughly the same length, and have equal numbers of negative samples as positive ones. However, we avoid replacing the deleted words with a blank, so that our model has no clue regarding the exact position of the possibly missing trigger. Another related work on the Children’s Book Test (Hill et al., 2015) notes that memories that encode sub-sentential chunks (windows) of informative text seem to be most useful to neural networks when interpreting and modelling language. Their finding inspires us to run initial experiments with different context windows and tune the size of chunks according to the Logistic Regression results on the development set. 2.2 Attention In the context of encoder-decoder models, attention weights are usually based on an energy measure of the previous decoder hidden state and encoder hidden states. Many variations on attention computation exist. Sukhbaatar et al. (2015) propose an attention mechanism conditioned on a query and applied to a document. To generate summaries, Paulus et al. (2018) add an attention mechanism in the prediction layer, as opposed to the hidden states. Vaswani et al. (2017) suggest a model which learns an input representation by self-attending over inputs. While these methods are all tailored to their specific tasks, they all inspire our choice of a self-attending mechanism. 
3 Datasets 3.1 Corpora We extract datasets from two corpora, namely the Penn Treebank (PTB) corpus (Marcus et al., 1993) and a subset (sections 000-760) of the third edition of the English Gigaword corpus (Graff et al., 2007). For the PTB dataset, we use sections 22 and 23 for testing. For the Gigaword corpus, we (’still’, [’The’, ’Old’, ’Granary’, .../* 46 to kens o m i t t e d */...,’has’, ’@@@@’, ’included’, ’Bertrand’, ’Russell’, .../* 6 t oken s o m i t t e d */... ’Morris ’], [’DT’, ’NNP’, ’NNP’, .../* 46 tok ens o m i t t e d */..., ’VBZ’, ’@@@@’, ’VBN’, ’NNP’, ’NNP’, .../* 6 t oken s o m i t t e d */... ’NNP’]) Figure 1: An example of an instance containing a presuppositional trigger from our dataset. use sections 700-760 for testing. For the remaining data, we randomly chose 10% of them for development, and the other 90% for training. For each dataset, we consider a set of five target adverbs: too, again, also, still, and yet. We choose these five because they are commonly used adverbs that trigger presupposition. Since we are concerned with investigating the capacity of attentional deep neural networks in predicting the presuppositional effects in general, we frame the learning problem as a binary classification for predicting the presence of an adverbial presupposition (as opposed to the identity of the adverb). On the Gigaword corpus, we consider each adverb separately, resulting in five binary classification tasks. This was not feasible for PTB because of its small size. Finally, because of the commonalities between the adverbs in presupposing similar events, we create a dataset that unifies all instances of the five adverbs found in the Gigaword corpus, with a label “1” indicating the presence of any of these adverbs. 3.2 Data extraction process We define a sample in our dataset as a 3-tuple, consisting of a label (representing the target adverb, or ‘none’ for a negative sample), a list of tokens we extract (before/after the adverb), and a list of corresponding POS tags (Klein and Manning, 2002). In each sample, we also add a special token “@@@@” right before the head word and the corresponding POS tag of the head word, both in positive and negative cases. We add such special tokens to identify the candidate context in the passage to the model. Figure 1 shows a single positive sample in our dataset. We first extract positive contexts that contain a triggering adverb, then extract negative contexts 2750 that do not, controlling for a number of potential confounds. Our positive data consist of cases where the target adverb triggers presupposition by modifying a certain head word which, in most cases, is a verb. We define such head word as a governor of the target adverb. When extracting positive data, we scan through all the documents, searching for target adverbs. For each occurrence of a target adverb, we store the location and the governor of the adverb. Taking each occurrence of a governor as a pivot, we extract the 50 unlemmatized tokens preceding it, together with the tokens right after it up to the end of the sentence (where the adverb is)–with the adverb itself being removed. If there are less than 50 tokens before the adverb, we simply extract all of these tokens. In preliminary testing using a logistic regression classifier, we found that limiting the size to 50 tokens had higher accuracy than 25 or 100 tokens. 
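To make the extraction procedure concrete, the following is a minimal sketch of the positive-context extraction described above. It is illustrative only: the function names, the `governor_index` and `sentence_end` helpers, and the representation of a document as (token, POS) pairs are assumptions for the sketch, not part of the released pipeline.

```python
# Hypothetical sketch of the positive-sample extraction described above.
# `doc` is assumed to be a list of (token, pos) pairs for one document;
# `governor_index(i)` returns the index of the adverb's syntactic governor
# and `sentence_end(i)` the index one past the adverb's sentence.
# Both helpers are illustrative, not from the released code.

TARGET_ADVERBS = {"too", "again", "also", "still", "yet"}
WINDOW = 50          # tokens kept before the governor (tuned on dev data)
MARKER = "@@@@"      # special token inserted right before the head word


def extract_positive(doc, sentence_end, governor_index):
    """Yield (label, tokens, pos_tags) tuples for each trigger occurrence."""
    for i, (token, _) in enumerate(doc):
        if token.lower() not in TARGET_ADVERBS:
            continue
        head = governor_index(i)            # pivot for the context window
        start = max(0, head - WINDOW)
        end = sentence_end(i)               # up to the end of the sentence
        tokens, tags = [], []
        for j in range(start, end):
            if j == i:                      # the trigger itself is removed
                continue
            if j == head:                   # mark the candidate head word
                tokens.append(MARKER)
                tags.append(MARKER)
            tokens.append(doc[j][0])
            tags.append(doc[j][1])
        yield (token.lower(), tokens, tags)
```

Negative samples are then gathered analogously, by locating occurrences of the same governors that are not modified by any target adverb.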
As some head words themselves are stopwords, in the list of tokens, we do not remove any stopwords from the sample; otherwise, we would lose many important samples. We filter out the governors of “too" that have POS tags “JJ” and “RB” (adjectives and adverbs), because such cases corresponds to a different sense of “too” which indicates excess quantity and does not trigger presupposition (e.g., “rely too heavily on”, “it’s too far from”). After extracting the positive cases, we then use the governor information of positive cases to extract negative data. In particular, we extract sentences containing the same governors but not any of the target adverbs as negatives. In this way, models cannot rely on the identity of the governor alone to predict the class. This procedure also roughly balances the number of samples in the positive and negative classes. For each governor in a positive sample, we locate a corresponding context in the corpus where the governor occurs without being modified by any of the target adverbs. We then extract the surrounding tokens in the same fashion as above. Moreover, we try to control positionrelated confounding factors by two randomization approaches: 1) randomize the order of documents to be scanned, and 2) within each document, start scanning from a random location in the document. Note that the number of negative cases might not be exactly equal to the number of negative cases in all datasets because some governors appearing in positive cases are rare words, and we’re unable to find any (or only few) occurrences that match them for the negative cases. 4 Learning Model In this section, we introduce our attention-based model. At a high level, our model extends a bidirectional LSTM model by computing correlations between the hidden states at each timestep, then applying an attention mechanism over these correlations. Our proposed weighted-pooling (WP) neural network architecture is shown in Figure 2. The input sequence u = {u1, u2, . . . , uT } consists of a sequence, of time length T, of onehot encoded word tokens, where the original tokens are those such as in Listing 1. Each token ut is embedded with pretrained embedding matrix We ∈R|V |×d, where |V | corresponds to the number of tokens in vocabulary V , and d defines the size of the word embeddings. The embedded token vector xt ∈Rd is retrieved simply with xt = utWe. Optionally, xt may also include the token’s POS tag. In such instances, the embedded token at time step t is concatenated with the POS tag’s one-hot encoding pt: xt = utWe||pt, where || denotes the vector concatenation operator. At each input time step t, a bi-directional LSTM (Hochreiter and Schmidhuber, 1997) encodes xt into hidden state ht ∈Rs: ht = h−→ ht||←− ht i (1) where −→ ht = f(xt, ht−1) is computed by the forward LSTM, and ←− ht = f(xt, ht+1) is computed by the backward LSTM. Concatenated vector ht is of size 2s, where s is a hyperparameter determining the size of the LSTM hidden states. Let matrix H ∈R2s×T correspond to the concatenation of all hidden state vectors: H = [h1||h2|| . . . ||hT ]. (2) Our model uses a form of self-attention (Paulus et al., 2018; Vaswani et al., 2017), where the input sequence acts as both the attention mechanism’s query and key/value. Since the location of a presupposition trigger can greatly vary from one sample to another, and because dependencies can be long range or short range, we model all possible word-pair interactions within a sequence. 
We calculate the energy between all input tokens with a 2751 Training set Test set Corpus Positive Negative Total Positive Negative Total PTB 2,596 2,579 5,175 249 233 482 Gigaword yet 32,024 31,819 63,843 7950 7890 15840 Gigaword too 55,827 29,918 85,745 13987 7514 21501 Gigaword again 43,120 42,824 85,944 10935 10827 21762 Gigaword still 97,670 96,991 194,661 24509 24232 48741 Gigaword also 269,778 267,851 537,626 66878 66050 132928 Gigaword all 498,415 491,173 989,588 124255 123078 247333 Table 1: Number of training samples in each dataset. Figure 2: Our weighted-pooling neural network architecture (WP). The tokenized input is embedded with pretrained word embeddings and possibly concatenated with one-hot encoded POS tags. The input is then encoded with a bi-directional LSTM, followed by our attention mechanism. The computed attention scores are then used as weights to average the encoded states, in turn connected to a fully connected layer to predict presupposition triggering. pair-wise matching matrix: M = H⊤H (3) where M is a square matrix ∈RT×T . To get a single attention weight per time step, we adopt the attention-over-attention method (Cui et al., 2017). With matrix M, we first compute row-wise attention score Mr ij over M: Mr ij = exp(eij) PT t=1 exp(eit) (4) where eij = Mij. Mr can be interpreted as a word-level attention distribution over all other words. Since we would like a single weight per word, we need an additional step to aggregate these attention scores. Instead of simply averaging the scores, we follow (Cui et al., 2017)’s approach which learns the aggregation by an additional attention mechanism. We compute columnwise softmax Mc ij over M: Mc ij = exp(eij) PT t=1 exp(etj) (5) The columns of Mr are then averaged, forming vector β ∈RT . Finally, β is multiplied with the column-wise softmax matrix Mc to get attention vector α: α = Mr⊤β. (6) 2752 Note Equations (2) to (6) have described how we derived an attention score over our input without the introduction of any new parameters, potentially minimizing the computational effect of our attention mechanism. As a last layer to their neural network, Cui et al. (2017) sum over α to extract the most relevant input. However, we use α as weights to combine all of our hidden states ht: c = T X t=1 αtht (7) where c ∈Rs. We follow the pooling with a dense layer z = σ(Wzc + bz), where σ is a non-linear function, matrix Wz ∈R64×s and vector bz ∈R64 are learned parameters. The presupposition trigger probability is computed with an affine transform followed by a softmax: ˆy = softmax(Woz + bo) (8) where matrix Wo ∈R2×64 and vector bo ∈R2 are learned parameters. The training objective minimizes: J(θ) = 1 m m X t=1 E(ˆy, y) (9) where E(· , ·) is the standard cross-entropy. 5 Experiments We compare the performance of our WP model against several models which we describe in this section. We carry out the experiments on both datasets described in Section 3. We also investigate the impact of POS tags and attention mechanism on the models’ prediction accuracy. 5.1 Baselines We compare our learning model against the following systems. The first is the most-frequentclass baseline (MFC) which simply labels all samples with the most frequent class of 1. The second is a logistic regression classifier (LogReg), in which the probabilities describing the possible outcomes of a single input x is modeled using a logistic function. 
We implement this baseline classifier with the scikit-learn package (Pedregosa et al., 2011), with a CountVectorizer including bi-gram features. All of the other hyperparameters are set to default weights. The third is a variant LSTM recurrent neural network as introduced in (Graves, 2013). The input is encoded by a bidirectional LSTM like the WP model detailed in Section 4. Instead of a self-attention mechanism, we simply mean-pool matrix H, the concatenation of all LSTM hidden states, across all time steps. This is followed by a fully connected layer and softmax function for the binary classification. Our WP model uses the same bidirectional LSTM as this baseline LSTM, and has the same number of parameters, allowing for a fair comparison of the two models. Such a standard LSTM model represents a state-of-the-art language model, as it outperforms more recent models on language modeling tasks when the number of model parameters is controlled for (Melis et al., 2017). For the last model, we use a slight variant of the CNN sentence classification model of (Kim, 2014) based on the Britz tensorflow implementation2. 5.2 Hyperparameters & Additional Features After tuning, we found the following hyperparameters to work best: 64 units in fully connected layers and 40 units for POS embeddings. We used dropout with probability 0.5 and mini-batch size of 64. For all models, we initialize word embeddings with word2vec (Mikolov et al., 2013) pretrained embeddings of size 300. Unknown words are randomly initialized to the same size as the word2vec embeddings. In early tests on the development datasets, we found that our neural networks would consistently perform better when fixing the word embeddings. All neural network performance reported in this paper use fixed embeddings. Fully connected layers in the LSTM, CNN and WP model are regularized with dropout (Srivastava et al., 2014). The model parameters for these neural networks are fine-tuned with the Adam algorithm (Kingma and Ba, 2015). To stabilize the RNN training gradients (Pascanu et al., 2013), we perform gradient clipping for gradients below threshold value -1, or above 1. To reduce overfitting, we stop training if the development set does not improve in accuracy for 10 epochs. All performance on the test set is reported using the best trained model as measured on the development set. In addition, we use the CoreNLP Part-of2http://www.wildml.com/2015/12/implementing-a-cnnfor-text-classification-in-tensorflow/ 2753 Accuracy WSJ Gigaword Models Variants All adverbs All adverbs Also Still Again Too Yet MFC 51.66 50.24 50.32 50.29 50.25 65.06 50.19 LogReg + POS 52.81 53.65 52.00 56.36 59.49 69.77 61.05 - POS 54.47 52.86 56.07 55.29 58.60 67.60 58.60 CNN + POS 58.84 59.12 61.53 59.54 60.26 67.53 59.69 - POS 62.16 57.21 59.76 56.95 57.28 67.84 56.53 LSTM + POS 74.23 60.58 81.48 60.72 61.81 69.70 59.13 - POS 73.18 58.86 81.16 58.97 59.93 68.32 55.71 WP + POS 76.09 60.62 82.42 61.00 61.59 69.38 57.68 - POS 74.84 58.87 81.64 59.03 58.49 68.37 56.68 Table 2: Performance of various models, including our weighted-pooled LSTM (WP). MFC refers to the most-frequent-class baseline, LogReg is the logistic regression baseline. LSTM and CNN correspond to strong neural network baselines. Note that we bold the performance numbers for the best performing model for each of the “+ POS” case and the “- POS” case. Speech (POS) tagger (Manning et al., 2014) to get corresponding POS features for extracted tokens. 
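Before turning to results, the weighted-pooling attention of Section 4 reduces to a handful of matrix operations. The sketch below restates Equations (3)–(8) in numpy; it assumes H is the matrix of concatenated BiLSTM states and uses Wz, bz, Wo and bo as stand-ins for the learned dense-layer parameters. The pairing of the row- and column-wise softmaxes in the aggregation step follows the prose description of the mechanism, so the sketch should be read as an interpretation rather than the exact released implementation.

```python
import numpy as np

# Minimal numpy sketch of the weighted-pooling attention (Section 4).
# H: (2s, T) matrix of BiLSTM hidden states; Wz, bz, Wo, bo: stand-ins
# for the learned parameters of the dense and output layers.

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def weighted_pooling(H, Wz, bz, Wo, bo):
    M = H.T @ H                          # pairwise matching matrix, T x T (Eq. 3)
    M_row = softmax(M, axis=1)           # row-wise attention scores (Eq. 4)
    M_col = softmax(M, axis=0)           # column-wise softmax (Eq. 5)
    beta = M_row.mean(axis=0)            # average the row distributions, length T
    alpha = M_col @ beta                 # one aggregated weight per token (Eq. 6)
    c = H @ alpha                        # attention-weighted sum of states (Eq. 7)
    z = np.tanh(Wz @ c + bz)             # tanh as a stand-in for the non-linearity
    return softmax(Wo @ z + bo, axis=0)  # trigger probability (Eq. 8)
```

Note that the only parameters in this computation belong to the dense and output layers already present in the LSTM baseline, which is why the attention itself adds no parameters.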
In all of our models, we limit the maximum length of samples and POS tags to 60 tokens. For the CNN, sequences shorter than 60 tokens are zeropadded. 6 Results Table 2 shows the performance obtained by the different models with and without POS tags. Overall, our attention model WP outperforms all other models in 10 out of 14 scenarios (combinations of datasets and whether or not POS tags are used). Importantly, our model outperforms the regular LSTM model without introducing additional parameters to the model, which highlights the advantage of WP’s attention-based pooling method. For all models listed in Table 2, we find that including POS tags benefits the detection of adverbial presupposition triggers in Gigaword and PTB datasets. Note that, in Table 2, we bolded accuracy figures that were within 0.1% of the best performing WP model as McNemar’s test did not show that WP significantly outperformed the other model in these cases (p value > 0.05). Table 3 shows the confusion matrix for the best performing model (WP,+POS). The small differences in the off-diagonal entries inform us that the model misclassifications are not particularly skewed towards the presence or absence of presupposition triggers. Predicted Actual Absence Presence Absence 54,658 11,961 Presence 11,776 55,006 Table 3: Confusion matrix for the best performing model, predicting the presence of a presupposition trigger or the absence of such as trigger. WP Cor. WP Inc. LSTM Cor. 101,443 6,819 LSTM Inc. 8,016 17,123 Table 4: Contingency table for correct (cor.) and incorrect (inc.) predictions between the LSTM baseline and the attention model (WP) on the Giga_also dataset. The contingency table, shown in Table 4, shows the distribution of agreed and disagreed classification. 7 Analysis Consider the following pair of samples that we randomly choose from the PTB dataset (shortened for readability): 1. ...Taped just as the market closed yesterday , it offers Ms. Farrell advising , " We view 2754 the market here as going through a relatively normal cycle ... . We continue to feel that the stock market is the @@@@ place to be for long-term appreciation 2. ...More people are remaining independent longer presumably because they are better off physically and financially . Careers count most for the well-to-do many affluent people @@@@ place personal success and money above family In both cases, the head word is place. In Example 1, the word continue (emphasized in the above text) suggests that adverb still could be used to modify head word place (i.e., ... the stock market is still the place ...). Further, it is also easy to see that place refers to stock market, which has occurred in the previous context. Our model correctly predicts this sample as containing a presupposition, this despite the complexity of the coreference across the text. In the second case of the usage of the same main head word place in Example 2, our model falsely predicts the presence of a presupposition. However, even a human could read the sentence as “many people still place personal success and money above family”. This underlies the subtlety and difficulty of the task at hand. The longrange dependencies and interactions within sentences seen in these examples are what motivate the use of the various deep non-linear models presented in this work, which are useful in detecting these coreferences, particularly in the case of attention mechanisms. 
8 Conclusion In this work, we have investigated the task of predicting adverbial presupposition triggers and introduced several datasets for the task. Additionally, we have presented a novel weighted-pooling attention mechanism which is incorporated into a recurrent neural network model for predicting the presence of an adverbial presuppositional trigger. Our results show that the model outperforms the CNN and LSTM, and does not add any additional parameters over the standard LSTM model. This shows its promise in classification tasks involving capturing and combining relevant information from multiple points in the previous context. In future work, we would like to focus more on designing models that can deal with and be optimized for scenarios with severe data imbalance. We would like to also explore various applications of presupposition trigger prediction in language generation applications, as well as additional attention-based neural network architectures. Acknowledgements The authors would like to thank the reviewers for their valuable comments. This work was supported by the Centre de Recherche d’Informatique de Montréal (CRIM), the Fonds de Recherche du Québec – Nature et Technologies (FRQNT) and the Natural Sciences and Engineering Research Council of Canada (NSERC). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Advances in Neural Information Processing Systems (NIPS 2015), pages 649–657, Montreal, Canada. David I. Beaver and Bart Geurts. 2014. Presupposition. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy, winter 2014 edition. Metaphysics Research Lab, Stanford University. R. Blutner, H. Zeevat, K. Bach, A. Bezuidenhout, R. Breheny, S. Glucksberg, F. Happé, F. Recanati, and D. Wilson. 2003. Optimality Theory and Pragmatics. Palgrave Studies in Pragmatics, Language and Cognition. Palgrave Macmillan UK. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2017. Attention-overattention neural networks for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 593–602. G. R. Miller E. B. Coleman. 1968. A measure of information gained during prose learning. Reading Research Quarterly, 3(3):369–386. Joseph W. Culhane Earl F. Rankin. 1969. Comparable cloze and multiple-choice comprehension test scores. Journal of Reading, 13(3):193–198. Gottlob Frege. 1892. Über sinn und bedeutung. Zeitschrift für Philosophie und philosophische Kritik, 100:25–50. David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2007. English gigaword third edition. Technical report, Linguistic Data Consortium. Alex Graves. 2013. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850. 2755 Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. CoRR, abs/1511.02301. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Jad Kabbara, Yulan Feng, and Jackie Chi Kit Cheung. 2016. Capturing pragmatic knowledge in article usage prediction using lstms. In COLING, pages 2625–2634. Qiang Kang. 2012. The use of too as a pragmatic presupposition trigger. Canadian Social Science, 8(6):165–169. David Kaplan. 1970. What is Russell’s theory of descriptions? In Wolfgang Yourgrau and Allen D. 
Breck, editors, Physics, Logic, and History, pages 277–295. Plenum Press. Layth Muthana Khaleel. 2010. An analysis of presupposition triggers in english journalistic texts. Of College Of Education For Women, 21(2):523–551. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceeding of the 2015 International Conference on Learning Representation (ICLR 2015), San Diego, California. Dan Klein and Christopher D. Manning. 2002. A generative constituent-context model for improved grammar induction. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 128–135, Stroudsburg, PA, USA. Association for Computational Linguistics. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Mitchell P. Marcus, Mary A. Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Gábor Melis, Chris Dyer, and Phil Blunsom. 2017. On the state of the art of evaluation in neural language models. CoRR, abs/1707.05589. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on International Conference on Machine Learning - Volume 28, ICML’13, pages 1310–1318. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In International Conference on Learning Representations, Vancouver, Canada. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Ellen Riloff and Michael Thelen. 2000. A rule-based question answering system for reading comprehension tests. In Proceedings of the 2000 ANLP/NAACL Workshop on Reading Comprehension Tests As Evaluation for Computer-based Language Understanding Sytems - Volume 6, ANLP/NAACLReadingComp ’00, pages 13–19, Stroudsburg, PA, USA. Association for Computational Linguistics. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of machine learning research, 15(1):1929–1958. Robert Stalnaker. 1973. Presuppositions. Journal of philosophical logic, 2(4):447–457. Robert Stalnaker. 1998. On the representation of context. Journal of Logic, Language and Information, 7(1):3–19. Peter F. Strawson. 1950. On referring. Mind, 59(235):320–344. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems 28, pages 2440–2448. Wilson L. Taylor. 1953. 
Cloze procedure: A new tool for measuring readability. Journalism Quarterly, 30(4):415. Ashish Vaswani, Noam Shazeer, Niki Parmar, Llion Jones, Jakob Uszkoreit, Aidan N Gomez, and Ł ukasz Kaiser. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5994–6004. Javad Zare, Ehsan Abbaspour, and Mahdi Rajaee Nia. 2012. Presupposition trigger—a comparative analysis of broadcast news discourse. International Journal of Linguistics, 4(3):734–743.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 273–283 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 273 Graph-to-Sequence Learning using Gated Graph Neural Networks Daniel Beck† Gholamreza Haffari‡ Trevor Cohn† †School of Computing and Information Systems University of Melbourne, Australia {d.beck,t.cohn}@unimelb.edu.au ‡Faculty of Information Technology Monash University, Australia [email protected] Abstract Many NLP applications can be framed as a graph-to-sequence learning problem. Previous work proposing neural architectures on this setting obtained promising results compared to grammar-based approaches but still rely on linearisation heuristics and/or standard recurrent networks to achieve the best performance. In this work, we propose a new model that encodes the full structural information contained in the graph. Our architecture couples the recently proposed Gated Graph Neural Networks with an input transformation that allows nodes and edges to have their own hidden representations, while tackling the parameter explosion problem present in previous work. Experimental results show that our model outperforms strong baselines in generation from AMR graphs and syntax-based neural machine translation. 1 Introduction Graph structures are ubiquitous in representations of natural language. In particular, many wholesentence semantic frameworks employ directed acyclic graphs as the underlying formalism, while most tree-based syntactic representations can also be seen as graphs. A range of NLP applications can be framed as the process of transducing a graph structure into a sequence. For instance, language generation may involve realising a semantic graph into a surface form and syntactic machine translation involves transforming a tree-annotated source sentence to its translation. Previous work in this setting rely on grammarbased approaches such as tree transducers (Flanigan et al., 2016) and hyperedge replacement grammars (Jones et al., 2012). A key limitation of these approaches is that alignments between graph nodes and surface tokens are required. These alignments are usually automatically generated so they can propagate errors when building the grammar. More recent approaches transform the graph into a linearised form and use off-the-shelf methods such as phrase-based machine translation (Pourdamghani et al., 2016) or neural sequenceto-sequence (henceforth, s2s) models (Konstas et al., 2017). Such approaches ignore the full graph structure, discarding key information. In this work we propose a model for graph-tosequence (henceforth, g2s) learning that leverages recent advances in neural encoder-decoder architectures. Specifically, we employ an encoder based on Gated Graph Neural Networks (Li et al., 2016, GGNNs), which can incorporate the full graph structure without loss of information. Such networks represent edge information as label-wise parameters, which can be problematic even for small sized label vocabularies (in the order of hundreds). To address this limitation, we also introduce a graph transformation that changes edges to additional nodes, solving the parameter explosion problem. This also ensures that edges have graphspecific hidden vectors, which gives more information to the attention and decoding modules in the network. 
We benchmark our model in two graph-tosequence problems, generation from Abstract Meaning Representations (AMRs) and Neural Machine Translation (NMT) with source dependency information. Our approach outperforms strong s2s baselines in both tasks without relying on standard RNN encoders, in contrast with previous work. In particular, for NMT we show that we avoid the need for RNNs by adding sequential edges between contiguous words in the dependency tree. This illustrates the generality of our 274 want-01 believe-01 boy girl ARG0 ARG1 ARG0 ARG1 Figure 1: Left: the AMR graph representing the sentence “The boy wants the girl to believe him.”. Right: Our proposed architecture using the same AMR graph as input and the surface form as output. The first layer is a concatenation of node and positional embeddings, using distance from the root node as the position. The GGNN encoder updates the embeddings using edge-wise parameters, represented by different colors (in this example, ARG0 and ARG1). The encoder also add corresponding reverse edges (dotted arrows) and self edges for each node (dashed arrows). All parameters are shared between layers. Attention and decoder components are similar to standard s2s models. This is a pictorial representation: in our experiments the graphs are transformed before being used as inputs (see §3). approach: linguistic biases can be added to the inputs by simple graph transformations, without the need for changes to the model architecture. 2 Neural Graph-to-Sequence Model Our proposed architecture is shown in Figure 1, with an example AMR graph and its transformation into its surface form. Compared to standard s2s models, the main difference is in the encoder, where we employ a GGNN to build a graph representation. In the following we explain the components of this architecture in detail.1 2.1 Gated Graph Neural Networks Early approaches for recurrent networks on graphs (Gori et al., 2005; Scarselli et al., 2009) assume a fixed point representation of the parameters and learn using contraction maps. Li et al. (2016) argues that this restricts the capacity of the model and makes it harder to learn long distance relations between nodes. To tackle these issues, they propose Gated Graph Neural Networks, which extend these architectures with gating mechanisms 1Our implementation uses MXNet (Chen et al., 2015) and is based on the Sockeye toolkit (Hieber et al., 2017). Code is available at github.com/beckdaniel/acl2018_ graph2seq. in a similar fashion to Gated Recurrent Units (Cho et al., 2014). This allows the network to be learnt via modern backpropagation procedures. In following, we formally define the version of GGNNs we employ in this study. Assume a directed graph G = {V, E, LV, LE}, where V is a set of nodes (v, ℓv), E is a set of edges (vi, vj, ℓe) and LV and LE are respectively vocabularies for nodes and edges, from which node and edge labels (ℓv and ℓe) are defined. Given an input graph with nodes mapped to embeddings X, a GGNN is defined as h0 v = xv rt v = σ cr v X u∈Nv Wr ℓeh(t−1) u + br ℓe ! zt v = σ cz v X u∈Nv Wz ℓeh(t−1) u + bz ℓe ! eht v = ρ cv X u∈Nv Wℓe  rt u ⊙h(t−1) u  + bℓe ! ht v = (1 −zt v) ⊙h(i−1) v + zt v ⊙eht v where e = (u, v, ℓe) is the edge between nodes u and v, N(v) is the set of neighbour nodes for v, ρ is a non-linear function, σ is the sigmoid function 275 and cv = cz v = cr v = |Nv|−1 are normalisation constants. Our formulation differs from the original GGNNs from Li et al. 
(2016) in some aspects: 1) we add bias vectors for the hidden state, reset gate and update gate computations; 2) labelspecific matrices do not share any components; 3) reset gates are applied to all hidden states before any computation and 4) we add normalisation constants. These modifications were applied based on preliminary experiments and ease of implementation. An alternative to GGNNs is the model from Marcheggiani and Titov (2017), which add edge label information to Graph Convolutional Networks (GCNs). According to Li et al. (2016), the main difference between GCNs and GGNNs is analogous to the difference between convolutional and recurrent networks. More specifically, GGNNs can be seen as multi-layered GCNs where layer-wise parameters are tied and gating mechanisms are added. A large number of layers can propagate node information between longer distances in the graph and, unlike GCNs, GGNNs can have an arbitrary number of layers without increasing the number of parameters. Nevertheless, our architecture borrows ideas from GCNs as well, such as normalising factors. 2.2 Using GGNNs in attentional encoder-decoder models In s2s models, inputs are sequences of tokens where each token is represented by an embedding vector. The encoder then transforms these vectors into hidden states by incorporating context, usually through a recurrent or a convolutional network. These are fed into an attention mechanism, generating a single context vector that informs decisions in the decoder. Our model follows a similar structure, where the encoder is a GGNN that receives node embeddings as inputs and generates node hidden states as outputs, using the graph structure as context. This is shown in the example of Figure 1, where we have 4 hidden vectors, one per node in the AMR graph. The attention and decoder components follow similar standard s2s models, where we use a bilinear attention mechanism (Luong et al., 2015) and a 2-layered LSTM (Hochreiter and Schmidhuber, 1997) as the decoder. Note, however, that other decoders and attention mechanisms can be easily employed instead. Bastings et al. (2017) employs a similar idea for syntax-based NMT, but using GCNs instead. 2.3 Bidirectionality and positional embeddings While our architecture can in theory be used with general graphs, rooted directed acyclic graphs (DAGs) are arguably the most common kind in the problems we are addressing. This means that node embedding information is propagated in a top down manner. However, it is desirable to have information flow from the reverse direction as well, in the same way RNN-based encoders benefit from right-to-left propagation (as in bidirectional RNNs). Marcheggiani and Titov (2017) and Bastings et al. (2017) achieve this by adding reverse edges to the graph, as well as self-loops edges for each node. These extra edges have specific labels, hence their own parameters in the network. In this work, we also follow this procedure to ensure information is evenly propagated in the graph. However, this raises another limitation: because the graph becomes essentially undirected, the encoder is now unaware of any intrinsic hierarchy present in the input. Inspired by Gehring et al. (2017) and Vaswani et al. (2017), we tackle this problem by adding positional embeddings to every node. 
These embeddings are indexed by integer values representing the minimum distance from the root node and are learned as model parameters.2 This kind of positional embedding is restricted to rooted DAGs: for general graphs, different notions of distance could be employed. 3 Levi Graph Transformation The g2s model proposed in §2 has two key deficiencies. First, GGNNs have three linear transformations per edge type. This means that the number of parameters can explode: AMR, for instance, has around 100 different predicates, which correspond to edge labels. Previous work deal with this problem by explicitly grouping edge labels into a single one (Marcheggiani and Titov, 2017; Bastings et al., 2017) but this is not an ideal solution since it incurs in loss of information. 2Vaswani et al. (2017) also proposed fixed positional embeddings based on sine and cosine wavelengths. Preliminary experiments showed that this approach did not work in our case: we speculate this is because wavelengths are more suitable to sequential inputs. 276 want-01 believe-01 boy girl ARG1 ARG0 ARG1 ARG0 want-01 believe-01 boy girl ARG1 ARG0 ARG1 ARG0 Figure 2: Top: the AMR graph from Figure 1 transformed into its corresponding Levi graph. Bottom: Levi graph with added reverse and self edges (colors represent different edge labels). The second deficiency is that edge label information is encoded in the form of GGNN parameters in the network. This means that each label will have the same “representation” across all graphs. However, the latent information in edges can depend on the content in which they appear in a graph. Ideally, edges should have instance-specific hidden states, in the same way as nodes, and these should also inform decisions made in the decoder through the attention module. For instance, in the AMR graph shown in Figure 1, the ARG1 predicate between want-01 and believe-01 can be interpreted as the preposition “to” in the surface form, while the ARG1 predicate connecting believe-01 and boy is realised as a pronoun. Notice that edge hidden vectors are already present in s2s networks that use linearised graphs: we would like our architecture to also have this benefit. Instead of modifying the architecture, we propose to transform the input graph into its equivalent Levi graph (Levi, 1942; Gross and Yellen, 2004, p. 765). Given a graph G = {V, E, LV, LE}, a Levi graph3 is defined as G = {V′, E′, LV′, LE′}, where V′ = V ∪E, LV′ = LV ∪LE and LE′ = ∅. The new edge set E′ contains a edge for every (node, edge) pair that is present in the original graph. By definition, the Levi graph is bipartite. Intuitively, transforming a graph into its Levi graph equivalent turns edges into additional nodes. While simple in theory, this transformation addresses both modelling deficiencies mentioned above in an elegant way. Since the Levi graph has no labelled edges there is no risk of parameter explosion: original edge labels are represented as embeddings, in the same way as nodes. Furthermore, the encoder now naturally generates hidden states for original edges as well. In practice, we follow the procedure in §2.3 and add reverse and self-loop edges to the Levi graph, so the practical edge label vocabulary is LE′ = {default, reverse, self}. This still keeps the parameter space modest since we have only three labels. Figure 2 shows the transformation steps in detail, applied to the AMR graph shown in Figure 1. 
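A minimal sketch of this transformation, together with the reverse and self edges of Section 2.3, is given below. The graph representation and function names are illustrative rather than taken from our implementation; any graph encoding with labelled nodes and edges admits the same construction.

```python
# Hedged sketch of the Levi graph transformation (Section 3) followed by
# the reverse/self edge augmentation of Section 2.3. A graph is a pair
# (nodes, edges): nodes maps node ids to labels, edges is a list of
# (source, target, edge_label) triples. Names are illustrative.

def to_levi_graph(nodes, edges):
    levi_nodes = dict(nodes)
    levi_edges = []
    for k, (src, tgt, label) in enumerate(edges):
        edge_node = f"e{k}"            # every labelled edge becomes a node
        levi_nodes[edge_node] = label  # its label joins the node vocabulary
        levi_edges.append((src, edge_node, "default"))
        levi_edges.append((edge_node, tgt, "default"))
    return levi_nodes, levi_edges


def add_reverse_and_self_edges(nodes, edges):
    edges = edges + [(tgt, src, "reverse") for (src, tgt, _) in edges]
    edges += [(v, v, "self") for v in nodes]
    return nodes, edges


# Example with the AMR fragment from Figure 1
nodes = {"w": "want-01", "b": "believe-01", "boy": "boy", "girl": "girl"}
edges = [("w", "boy", "ARG0"), ("w", "b", "ARG1"),
         ("b", "girl", "ARG0"), ("b", "boy", "ARG1")]
levi_nodes, levi_edges = add_reverse_and_self_edges(*to_levi_graph(nodes, edges))
```

After the transformation the edge label vocabulary is exactly {default, reverse, self}, which is what keeps the GGNN parameter space modest.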
Notice that the transformed graphs are the ones fed into our architecture: we show the original graph in Figure 1 for simplicity. It is important to note that this transformation can be applied to any graph and therefore is independent of the model architecture. We speculate this can be beneficial in other kinds of graph-based encoder such as GCNs and leave further investigation to future work. 4 Generation from AMR Graphs Our first g2s benchmark is language generation from AMR, a semantic formalism that represents sentences as rooted DAGs (Banarescu et al., 2013). Because AMR abstracts away from syntax, graphs do not have gold-standard alignment information, so generation is not a trivial task. Therefore, we hypothesize that our proposed model is ideal for this problem. 4.1 Experimental setup Data and preprocessing We use the latest AMR corpus release (LDC2017T10) with the default split of 36521/1368/1371 instances for training, 3Formally, a Levi graph is defined over any incidence structure, which is a general concept usually considered in a geometrical context. Graphs are an example of incidence structures but so are points and lines in the Euclidean space, for instance. 277 development and test sets. Each graph is preprocessed using a procedure similar to what is performed by Konstas et al. (2017), which includes entity simplification and anonymisation. This preprocessing is done before transforming the graph into its Levi graph equivalent. For the s2s baselines, we also add scope markers as in Konstas et al. (2017). We detail these procedures in the Supplementary Material. Models Our baselines are attentional s2s models which take linearised graphs as inputs. The architecture is similar to the one used in Konstas et al. (2017) for AMR generation, where the encoder is a BiLSTM followed by a unidirectional LSTM. All dimensionalities are fixed to 512. For the g2s models, we fix the number of layers in the GGNN encoder to 8, as this gave the best results on the development set. Dimensionalities are also fixed at 512 except for the GGNN encoder which uses 576. This is to ensure all models have a comparable number of parameters and therefore similar capacity. Training for all models uses Adam (Kingma and Ba, 2015) with 0.0003 initial learning rate and 16 as the batch size.4 To regularise our models we perform early stopping on the dev set based on perplexity and apply 0.5 dropout (Srivastava et al., 2014) on the source embeddings. We detail additional model and training hyperparameters in the Supplementary Material. Evaluation Following previous work, we evaluate our models using BLEU (Papineni et al., 2001) and perform bootstrap resampling to check statistical significance. However, since recent work has questioned the effectiveness of BLEU with bootstrap resampling (Graham et al., 2014), we also report results using sentence-level CHRF++ (Popovi´c, 2017), using the Wilcoxon signed-rank test to check significance. Evaluation is case-insensitive for both metrics. Recent work has shown that evaluation in neural models can lead to wrong conclusions by just changing the random seed (Reimers and Gurevych, 2017). In an effort to make our conclusions more robust, we run each model 5 times using different seeds. From each pool, we report 4Larger batch sizes hurt dev performance in our preliminary experiments. There is evidence that small batches can lead to better generalisation performance (Keskar et al., 2017). 
While this can make training time slower, it was doable in our case since the dataset is small. BLEU CHRF++ #params Single models s2s 21.7 49.1 28.4M s2s (-s) 18.4 46.3 28.4M g2s 23.3 50.4 28.3M Ensembles s2s 26.6 52.5 142M s2s (-s) 22.0 48.9 142M g2s 27.5 53.5 141M Previous work (early AMR treebank versions) KIYCZ17 22.0 – – Previous work (as above + unlabelled data) KIYCZ17 33.8 – – PKH16 26.9 – – SPZWG17 25.6 – – FDSC16 22.0 – – Table 1: Results for AMR generation on the test set. All score differences between our models and the corresponding baselines are significantly different (p<0.05). “(-s)” means input without scope marking. KIYCZ17, PKH16, SPZWG17 and FDSC16 are respectively the results reported in Konstas et al. (2017), Pourdamghani et al. (2016), Song et al. (2017) and Flanigan et al. (2016). results using the median model according to performance on the dev set (simulating what is expected from a single run) and using an ensemble of the 5 models. Finally, we also report the number of parameters used in each model. Since our encoder architectures are quite different, we try to match the number of parameters between them by changing the dimensionality of the hidden layers (as explained above). We do this to minimise the effects of model capacity as a confounder. 4.2 Results and analysis Table 1 shows the results on the test set. For the s2s models, we also report results without the scope marking procedure of Konstas et al. (2017). Our approach significantly outperforms the s2s baselines both with individual models and ensembles, while using a comparable number of parameters. In particular, we obtain these results without relying on scoping heuristics. On Figure 3 we show an example where our model outperforms the baseline. The AMR graph contains four reentrancies, predicates that refer278 Original AMR graph (p / propose-01 :ARG0 (c / country :wiki "Russia" :name (n / name :op1 "Russia")) :ARG1 (c5 / cooperate-01 :ARG0 c :ARG1 (a / and :op1 (c2 / country :wiki "India" :name (n2 / name :op1 "India")) :op2 (c3 / country :wiki "China" :name (n3 / name :op1 "China")))) :purpose (i / increase-01 :ARG0 c5 :ARG1 (s / security) :location (a2 / around :op1 (c4 / country :wiki "Afghanistan" :name (n4 / name :op1 "Afghanistan"))) :purpose (b / block-01 :ARG0 (a3 / and :op1 c :op2 c2 :op3 c3 :ARG1 (s2 / supply-01 :ARG1 (d / drug))))) Reference surface form Russia proposes cooperation with India and China to increase security around Afghanistan to block drug supplies. s2s output (CHRF++ 61.8) Russia proposed cooperation with India and China to increase security around the Afghanistan to block security around the Afghanistan , India and China. g2s output (CHRF++ 78.2) Russia proposed cooperation with India and China to increase security around Afghanistan to block drug supplies. Figure 3: Example showing overgeneration due to reentrancies. Top: original AMR graph with key reentrancies highlighted. Bottom: reference and outputs generated by the s2s and g2s models, highlighting the overgeneration phenomena. ence previously defined concepts in the graph. In the s2s models including Konstas et al. (2017), reentrant nodes are copied in the linearised form, while this is not necessary for our g2s models. We can see that the s2s prediction overgenerates the “India and China” phrase. The g2s prediction avoids overgeneration, and almost perfectly matches the reference. 
While this is only a single example, it provides evidence that retaining the full graphical structure is beneficial for this task, which is corroborated by our quantitative results. Table 1 also show BLEU scores reported in previous work. These results are not strictly comparable because they used different training set versions and/or employ additional unlabelled corpora; nonetheless some insights can be made. In particular, our g2s ensemble performs better than many previous models that combine a smaller training set with a large unlabelled corpus. It is also most informative to compare our s2s model with Konstas et al. (2017), since this baseline is very similar to theirs. We expected our single model baseline to outperform theirs since we use a larger training set but we obtained similar performance. We speculate that better results could be obtained by more careful tuning, but nevertheless we believe such tuning would also benefit our proposed g2s architecture. The best results with unlabelled data are obtained by Konstas et al. (2017) using Gigaword sentences as additional data and a paired trained procedure with an AMR parser. It is important to note that this procedure is orthogonal to the individual models used for generation and parsing. Therefore, we hypothesise that our model can also benefit from such techniques, an avenue that we leave for future work. 5 Syntax-based Neural Machine Translation Our second evaluation is NMT, using as graphs source language dependency syntax trees. We focus on a medium resource scenario where additional linguistic information tends to be more beneficial. Our experiments comprise two language pairs: English-German and English-Czech. 5.1 Experimental setup Data and preprocessing We employ the same data and settings from Bastings et al. (2017),5 which use the News Commentary V11 corpora from the WMT16 translation task.6 English text is tokenised and parsed using SyntaxNet7 while German and Czech texts are tokenised and split into subwords using byte-pair encodings (Sennrich et al., 2016, BPE) (8000 merge operations). 5We obtained the data from the original authors to ensure results are comparable without any influence from preprocessing steps. 6http://www.statmt.org/wmt16/ translation-task.html 7https://github.com/tensorflow/models/ tree/master/syntaxnet 279 We refer to Bastings et al. (2017) for further information on the preprocessing steps. Labelled dependency trees in the source side are transformed into Levi graphs as a preprocessing step. However, unlike AMR generation, in NMT the inputs are originally surface forms that contain important sequential information. This information is lost when treating the input as dependency trees, which might explain why Bastings et al. (2017) obtain the best performance when using an initial RNN layer in their encoder. To investigate this phenomenon, we also perform experiments adding sequential connections to each word in the dependency tree, corresponding to their order in the original surface form (henceforth, g2s+). These connections are represented as edges with specific left and right labels, which are added after the Levi graph transformation. Figure 4 shows an example of an input graph for g2s+, with the additional sequential edges connecting the words (reverse and self edges are omitted for simplicity). Models Our s2s and g2s models are almost the same as in the AMR generation experiments (§4.1). 
The only exception is the GGNN encoder dimensionality, where we use 512 for the experiments with dependency trees only and 448 when the inputs have additional sequential connections. As in the AMR generation setting, we do this to ensure model capacity are comparable in the number of parameters. Another key difference is that the s2s baselines do not use dependency trees: they are trained on the sentences only. In addition to neural models, we also report results for Phrase-Based Statistical MT (PB-SMT), using Moses (Koehn et al., 2007). The PB-SMT models are trained using the same data conditions as s2s (no dependency trees) and use the standard setup in Moses, except for the language model, where we use a 5-gram LM trained on the target side of the respective parallel corpus.8 Evaluation We report results in terms of BLEU and CHRF++, using case-sensitive versions of both metrics. Other settings are kept the same as in the AMR generation experiments (§4.1). For PBSMT, we also report the median result of 5 runs, obtained by tuning the model using MERT (Och and Ney, 2002) 5 times. 8Note that target data is segmented using BPE, which is not the usual setting for PB-SMT. We decided to keep the segmentation to ensure data conditions are the same. There is a deeper issue at stake . ROOT expl nsubj punct det amod prep pobj There is a deeper issue at stake . ROOT expl nsubj punct det amod prep pobj Figure 4: Top: a sentence with its corresponding dependency tree. Bottom: the transformed tree into a Levi graph with additional sequential connections between words (dashed lines). The full graph also contains reverse and self edges, which are omitted in the figure. 5.2 Results and analysis Table 2 shows the results on the respective test set for both language pairs. The g2s models, which do not account for sequential information, lag behind our baselines. This is in line with the findings of Bastings et al. (2017), who found that having a BiRNN layer was key to obtain the best results. However, the g2s+ models outperform the baselines in terms of BLEU scores under the same parameter budget, in both single model and ensemble scenarios. This result show that it is possible to incorporate sequential biases in our model without relying on RNNs or any other modification in the architecture. 280 English-German BLEU CHRF++ #params Single models PB-SMT 12.8 43.2 – s2s 15.5 40.8 41.4M g2s 15.2 41.4 40.8M g2s+ 16.7 42.4 41.2M Ensembles s2s 19.0 44.1 207M g2s 17.7 43.5 204M g2s+ 19.6 45.1 206M Results from (Bastings et al., 2017) BoW+GCN 12.2 – – BiRNN 14.9 – – BiRNN+GCN 16.1 – – English-Czech BLEU CHRF++ #params Single models PB-SMT 8.6 36.4 – s2s 8.9 33.8 39.1M g2s 8.7 32.3 38.4M g2s+ 9.8 33.3 38.8M Ensembles s2s 11.3 36.4 195M g2s 10.4 34.7 192M g2s+ 11.7 35.9 194M Results from (Bastings et al., 2017) BoW+GCN 7.5 – – BiRNN 8.9 – – BiRNN+GCN 9.6 – – Table 2: Results for syntax-based NMT on the test sets. All score differences between our models and the corresponding baselines are significantly different (p<0.05), including the negative CHRF++ result for En-Cs. Interestingly, we found different trends when analysing the CHRF++ numbers. In particular, this metric favours the PB-SMT models for both language pairs, while also showing improved performance for s2s in En-Cs. CHRF++ has been shown to better correlate with human judgments compared to BLEU, both at system and sentence level for both language pairs (Bojar et al., 2017), which motivated our choice as an additional metric. 
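For completeness, the sequential connections that distinguish g2s+ can be sketched as a small, hypothetical extension of the Levi graph construction shown earlier; the helper below assumes the word nodes are available in their original surface order and uses dedicated left/right labels, mirroring the description in the experimental setup above.

```python
# Hypothetical continuation of the Levi graph sketch: for g2s+, edges with
# dedicated "left"/"right" labels connect contiguous words of the original
# sentence, added after the Levi graph transformation.

def add_sequential_edges(nodes, edges, word_order):
    """word_order: list of word-node ids in their surface order."""
    for prev, nxt in zip(word_order, word_order[1:]):
        edges.append((prev, nxt, "right"))   # left-to-right connection
        edges.append((nxt, prev, "left"))    # and its counterpart
    return nodes, edges
```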
We leave further investigation of this phenomena for future work. We also show some of the results reported by Bastings et al. (2017) in Table 2. Note that their results were based on a different implementation, which may explain some variation in performance. Their BoW+GCN model is the most similar to ours, as it uses only an embedding layer and a GCN encoder. We can see that even our simpler g2s model outperforms their results. A key difference between their approach and ours is the Levi graph transformation and the resulting hidden vectors for edges. We believe their architecture would also benefit from our proposed transformation. In terms of baselines, s2s performs better than their BiRNN model for En-De and comparably for En-Cs, which corroborates that our baselines are strong ones. Finally, our g2s+ single models outperform their BiRNN+GCN results, in particular for En-De, which is further evidence that RNNs are not necessary for obtaining the best performance in this setting. An important point about these experiments is that we did not tune the architecture: we simply employed the same model we used in the AMR generation experiments, only adjusting the dimensionality of the encoder to match the capacity of the baselines. We speculate that even better results would be obtained by tuning the architecture to this task. Nevertheless, we still obtained improved performance over our baselines and previous work, underlining the generality of our architecture. 6 Related work Graph-to-sequence modelling Early NLP approaches for this problem were based on Hyperedge Replacement Grammars (Drewes et al., 1997, HRGs). These grammars assume the transduction problem can be split into rules that map portions of a graph to a set of tokens in the output sequence. In particular, Chiang et al. (2013) defines a parsing algorithm, followed by a complexity analysis, while Jones et al. (2012) report experiments on semantic-based machine translation using HRGs. HRGs were also used in previous work on AMR parsing (Peng et al., 2015). The main drawback of these grammar-based approaches though is the need for alignments between graph nodes and surface tokens, which are usually not available in gold-standard form. Neural networks for graphs Recurrent networks on general graphs were first proposed un281 der the name Graph Neural Networks (Gori et al., 2005; Scarselli et al., 2009). Our work is based on the architecture proposed by Li et al. (2016), which add gating mechanisms. The main difference between their work and ours is that they focus on problems that concern the input graph itself such as node classification or path finding while we focus on generating strings. The main alternative for neural-based graph representations is Graph Convolutional Networks (Bruna et al., 2014; Duvenaud et al., 2015; Kipf and Welling, 2017), which have been applied in a range of problems. In NLP, Marcheggiani and Titov (2017) use a similar architecture for Semantic Role Labelling. They use heuristics to mitigate the parameter explosion by grouping edge labels, while we keep the original labels through our Levi graph transformation. An interesting alternative is proposed by Schlichtkrull et al. (2017), which uses tensor factorisation to reduce the number of parameters. Applications Early work on AMR generation employs grammars and transducers (Flanigan et al., 2016; Song et al., 2017). 
Linearisation approaches include (Pourdamghani et al., 2016) and (Konstas et al., 2017), which showed that graph simplification and anonymisation are key to good performance, a procedure we also employ in our work. However, compared to our approach, linearisation incurs in loss of information. MT has a long history of previous work that aims at incorporating syntax (Wu, 1997; Yamada and Knight, 2001; Galley et al., 2004; Liu et al., 2006, inter alia). This idea has also been investigated in the context of NMT. Bastings et al. (2017) is the most similar work to ours, and we benchmark against their approach in our NMT experiments. Eriguchi et al. (2016) also employs source syntax, but using constituency trees instead. Other approaches have investigated the use of syntax in the target language (Aharoni and Goldberg, 2017; Eriguchi et al., 2017). Finally, Hashimoto and Tsuruoka (2017) treats source syntax as a latent variable, which can be pretrained using annotated data. 7 Discussion and Conclusion We proposed a novel encoder-decoder architecture for graph-to-sequence learning, outperforming baselines in two NLP tasks: generation from AMR graphs and syntax-based NMT. Our approach addresses shortcomings from previous work, including loss of information from linearisation and parameter explosion. In particular, we showed how graph transformations can solve issues with graph-based networks without changing the underlying architecture. This is the case of the proposed Levi graph transformation, which ensures the decoder can attend to edges as well as nodes, but also to the sequential connections added to the dependency trees in the case of NMT. Overall, because our architecture can work with general graphs, it is straightforward to add linguistic biases in the form of extra node and/or edge information. We believe this is an interesting research direction in terms of applications. Our architecture nevertheless has two major limitations. The first one is that GGNNs have a fixed number of layers, even though graphs can vary in size in terms of number of nodes and edges. A better approach would be to allow the encoder to have a dynamic number of layers, possibly based on the diameter (longest path) in the input graph. The second limitation comes from the Levi graph transformation: because edge labels are represented as nodes they end up sharing the vocabulary and therefore, the same semantic space. This is not ideal, as nodes and edges are different entities. An interesting alternative is Weave Module Networks (Kearnes et al., 2016), which explicitly decouples node and edge representations without incurring in parameter explosion. Incorporating both ideas to our architecture is an research direction we plan for future work. Acknowledgements This work was supported by the Australian Research Council (DP160102686). The research reported in this paper was partly conducted at the 2017 Frederick Jelinek Memorial Summer Workshop on Speech and Language Technologies, hosted at Carnegie Mellon University and sponsored by Johns Hopkins University with unrestricted gifts from Amazon, Apple, Facebook, Google, and Microsoft. The authors would also like to thank Joost Bastings for sharing the data from his paper’s experiments. References Roee Aharoni and Yoav Goldberg. 2017. Towards String-to-Tree Neural Machine Translation. In Proceedings of ACL. pages 132–140. 
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin 282 Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for Sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse. pages 178–186. Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima’an. 2017. Graph Convolutional Encoders for Syntax-aware Neural Machine Translation. In Proceedings of EMNLP. pages 1947–1957. Ondej Bojar, Yvette Graham, and Amir Kamran. 2017. Results of the WMT17 Metrics Shared Task. In Proceedings of WMT. volume 2, pages 293–301. Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. 2014. Spectral Networks and Locally Connected Networks on Graphs. In Proceedings of ICLR. page 14. Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. 2015. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Proceedings of the Workshop on Machine Learning Systems. pages 1–6. David Chiang, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, Bevan Jones, and Kevin Knight. 2013. Parsing Graphs with Hyperedge Replacement Grammars. In Proceedings of ACL. pages 924–932. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN EncoderDecoder for Statistical Machine Translation. In Proceedings of EMNLP. pages 1724–1734. Frank Drewes, Hans J¨org Kreowski, and Annegret Habel. 1997. Hyperedge Replacement Graph Grammars. Handbook of Graph Grammars and Computing by Graph Transformation . David Duvenaud, Dougal Maclaurin, Jorge AguileraIparraguirre, Rafael G´omez-Bombarelli, Timothy Hirzel, Al´an Aspuru-Guzik, and Ryan P Adams. 2015. Convolutional Networks on Graphs for Learning Molecular Fingerprints. In Proceedings of NIPS. pages 2215–2223. Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-Sequence Attentional Neural Machine Translation. In Proceedings of ACL. Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to Parse and Translate Improves Neural Machine Translation. In Proceedings of ACL. Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016. Generation from Abstract Meaning Representation using Tree Transducers. In Proceedings of NAACL. pages 731–739. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proceedings of NAACL. pages 273–280. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional Sequence to Sequence Learning. arXiv preprint . Marco Gori, Gabriele Monfardini, and Franco Scarselli. 2005. A New Model for Learning in Graph Domains. In Proceedings of IJCNN. volume 2, pages 729–734. Yvette Graham, Nitika Mathur, and Timothy Baldwin. 2014. Randomized Significance Tests in Machine Translation. In Proceedings of WMT. pages 266– 274. Jonathan Gross and Jay Yellen, editors. 2004. Handbook of Graph Theory. CRC Press. Kazuma Hashimoto and Yoshimasa Tsuruoka. 2017. Neural Machine Translation with Source-Side Latent Graph Parsing. In Proceedings of EMNLP. pages 125–135. Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A Toolkit for Neural Machine Translation. arXiv preprint pages 1–18. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. 
Long Short-Term Memory. Neural Computation 9(8):1735–1780. Bevan Jones, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-Based Machine Translation with Hyperedge Replacement Grammars. In Proceedings of COLING. pages 1359–1376. Steven Kearnes, Kevin McCloskey, Marc Berndl, Vijay Pande, and Patrick Riley. 2016. Molecular Graph Convolutions: Moving Beyond Fingerprints. Journal of Computer-Aided Molecular Design 30(8):595–608. Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. 2017. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. In Proceedings of ICLR. pages 1–16. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In Proceedings of ICLR. pages 1–15. Thomas N. Kipf and Max Welling. 2017. SemiSupervised Classification with Graph Convolutional Networks. In Proceedings of ICLR. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL Demo Session. pages 177–180. 283 Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: Sequence-to-Sequence Models for Parsing and Generation. In Proceedings of ACL. pages 146–157. Friedrich Wilhelm Levi. 1942. Finite Geometrical Systems. Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 2016. Gated Graph Sequence Neural Networks. In Proceedings of ICLR. 1, pages 1– 20. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the ACL - ACL ’06. pages 609–616. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attentionbased Neural Machine Translation. In Proceedings of EMNLP. pages 1412–1421. Diego Marcheggiani and Ivan Titov. 2017. Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling. In Proceedings of EMNLP. Franz Josef Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics - ACL ’02. page 295. https://doi.org/10.3115/1073083.1073133. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL. pages 311–318. Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A Synchronous Hyperedge Replacement Grammar based approach for AMR parsing. In Proceedings of CoNLL. pages 32–41. Maja Popovi´c. 2017. chrF ++: words helping character n-grams. In Proceedings of WMT. pages 612–618. Nima Pourdamghani, Kevin Knight, and Ulf Hermjakob. 2016. Generating English from Abstract Meaning Representations. In Proceedings of INLG. volume 0, pages 21–25. Nils Reimers and Iryna Gurevych. 2017. Reporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence Tagging. In Proceedings of EMNLP. pages 338–348. Franco Scarselli, Marco Gori, Ah Ching Tsoi, and Gabriele Monfardini. 2009. The Graph Neural Network Model. IEEE Transactions on Neural Networks 20(1):61–80. Michael Schlichtkrull, Thomas N. 
Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2017. Modeling Relational Data with Graph Convolutional Networks pages 1–12. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of ACL. pages 1715–1725. Linfeng Song, Xiaochang Peng, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2017. AMR-to-text Generation with Synchronous Node Replacement Grammar. In Proceedings of ACL. pages 7–13. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research 15:1929–1958. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Proceedings of NIPS. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics 23(3):377–403. Kenji Yamada and Kevin Knight. 2001. A Syntaxbased Statistical Translation Model. In Proceedings of ACL. pages 523–530.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 284–294 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 284 Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context Urvashi Khandelwal, He He, Peng Qi, Dan Jurafsky Computer Science Department Stanford University {urvashik,hehe,pengqi,jurafsky}@stanford.edu Abstract We know very little about how neural language models (LM) use prior linguistic context. In this paper, we investigate the role of context in an LSTM LM, through ablation studies. Specifically, we analyze the increase in perplexity when prior context words are shuffled, replaced, or dropped. On two standard datasets, Penn Treebank and WikiText-2, we find that the model is capable of using about 200 tokens of context on average, but sharply distinguishes nearby context (recent 50 tokens) from the distant history. The model is highly sensitive to the order of words within the most recent sentence, but ignores word order in the long-range context (beyond 50 tokens), suggesting the distant past is modeled only as a rough semantic field or topic. We further find that the neural caching model (Grave et al., 2017b) especially helps the LSTM to copy words from within this distant context. Overall, our analysis not only provides a better understanding of how neural LMs use their context, but also sheds light on recent success from cache-based models. 1 Introduction Language models are an important component of natural language generation tasks, such as machine translation and summarization. They use context (a sequence of words) to estimate a probability distribution of the upcoming word. For several years now, neural language models (NLMs) (Graves, 2013; Jozefowicz et al., 2016; Grave et al., 2017a; Dauphin et al., 2017; Melis et al., 2018; Yang et al., 2018) have consistently outperformed classical n-gram models, an improvement often attributed to their ability to model long-range dependencies in faraway context. Yet, how these NLMs use the context is largely unexplained. Recent studies have begun to shed light on the information encoded by Long Short-Term Memory (LSTM) networks. They can remember sentence lengths, word identity, and word order (Adi et al., 2017), can capture some syntactic structures such as subject-verb agreement (Linzen et al., 2016), and can model certain kinds of semantic compositionality such as negation and intensification (Li et al., 2016). However, all of the previous work studies LSTMs at the sentence level, even though they can potentially encode longer context. Our goal is to complement the prior work to provide a richer understanding of the role of context, in particular, long-range context beyond a sentence. We aim to answer the following questions: (i) How much context is used by NLMs, in terms of the number of tokens? (ii) Within this range, are nearby and long-range contexts represented differently? (iii) How do copy mechanisms help the model use different regions of context? We investigate these questions via ablation studies on a standard LSTM language model (Merity et al., 2018) on two benchmark language modeling datasets: Penn Treebank and WikiText-2. Given a pretrained language model, we perturb the prior context in various ways at test time, to study how much the perturbed information affects model performance. 
Specifically, we alter the context length to study how many tokens are used, permute tokens to see if LSTMs care about word order in both local and global contexts, and drop and replace target words to test the copying abilities of LSTMs with and without an external copy mechanism, such as the neural cache (Grave et al., 2017b). The cache operates by first recording target words and their context representations seen in the history, and then encouraging the model to copy a word from the past when the current context representation matches that word's recorded context vector. We find that the LSTM is capable of using about 200 tokens of context on average, with no observable differences from changing the hyperparameter settings. Within this context range, word order is only relevant within the 20 most recent tokens or about a sentence. In the long-range context, order has almost no effect on performance, suggesting that the model maintains a high-level, rough semantic representation of faraway words. Finally, we find that LSTMs can regenerate some words seen in the nearby context, but heavily rely on the cache to help them copy words from the long-range context.

2 Language Modeling

Language models assign probabilities to sequences of words. In practice, the probability can be factorized using the chain rule

P(w_1, \ldots, w_t) = \prod_{i=1}^{t} P(w_i \mid w_{i-1}, \ldots, w_1),

and language models compute the conditional probability of a target word w_t given its preceding context, w_1, \ldots, w_{t-1}. Language models are trained to minimize the negative log likelihood of the training corpus:

\mathrm{NLL} = -\frac{1}{T} \sum_{t=1}^{T} \log P(w_t \mid w_{t-1}, \ldots, w_1),

and the model's performance is usually evaluated by perplexity (PP) on a held-out set:

\mathrm{PP} = \exp(\mathrm{NLL}).

When testing the effect of ablations, we focus on comparing differences in the language model's losses (NLL) on the dev set, which is equivalent to relative improvements in perplexity.

3 Approach

Our goal is to investigate the effect of contextual features such as the length of context, word order and more, on LSTM performance. Thus, we use ablation analysis, during evaluation, to measure changes in model performance in the absence of certain contextual information.

                         PTB                  Wiki
                         Dev      Test        Dev       Test
  # Tokens               73,760   82,430      217,646   245,569
  Perplexity (no cache)  59.07    56.89       67.29     64.51
  Avg. Sent. Len.        20.9     20.9        23.7      22.6

Table 1: Dataset statistics and performance relevant to our experiments.

Typically, when testing the language model on a held-out sequence of words, all tokens prior to the target word are fed to the model; we call this the infinite-context setting. In this study, we observe the change in perplexity or NLL when the model is fed a perturbed context δ(w_{t-1}, \ldots, w_1), at test time. δ refers to the perturbation function, and we experiment with perturbations such as dropping tokens, shuffling/reversing tokens, and replacing tokens with other words from the vocabulary.1 It is important to note that we do not train the model with these perturbations. This is because the aim is to start with an LSTM that has been trained in the standard fashion, and discover how much context it uses and which features in nearby vs. long-range context are important. Hence, the mismatch in training and test is a necessary part of experiment design, and all measured losses are upper bounds which would likely be lower, were the model also trained to handle such perturbations.
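To make the perturbation setup concrete, the sketch below implements perturbation functions of the kind listed above, applied to a tokenised context only at evaluation time. The function names and the plain list-of-tokens representation are our own illustrative choices; the authors' released code should be treated as the reference.

```python
import random
from typing import List

def truncate_context(context: List[str], n: int) -> List[str]:
    """Keep only the n most recent tokens of the context."""
    return context[-n:]

def permute_span(context: List[str], s1: int, s2: int,
                 mode: str = "shuffle", seed: int = 0) -> List[str]:
    """Shuffle or reverse the tokens lying between s1 and s2 tokens
    before the target word (offsets counted back from the end)."""
    out = list(context)
    lo, hi = len(out) - s2, len(out) - s1
    span = out[lo:hi]
    if mode == "shuffle":
        random.Random(seed).shuffle(span)
    elif mode == "reverse":
        span = span[::-1]
    out[lo:hi] = span
    return out

def drop_word(context: List[str], target: str) -> List[str]:
    """Remove every occurrence of the target word from the context."""
    return [w for w in context if w != target]

# e.g. permute_span(ctx, 0, 20) shuffles the 20 most recent tokens,
# and truncate_context(ctx, 50) keeps only the nearby context.
```

Evaluation then simply compares the dev-set NLL obtained with a perturbed context against the unperturbed, infinite-context run.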
We use a standard LSTM language model, trained and finetuned using the Averaging SGD optimizer (Merity et al., 2018).2 We also augment the model with a cache only for Section 6.2, in order to investigate why an external copy mechanism is helpful. A short description of the architecture and a detailed list of hyperparameters is listed in Appendix A, and we refer the reader to the original paper for additional details. We analyze two datasets commonly used for language modeling, Penn Treebank (PTB) (Marcus et al., 1993; Mikolov et al., 2010) and Wikitext-2 (Wiki) (Merity et al., 2017). PTB consists of Wall Street Journal news articles with 0.9M tokens for training and a 10K vocabulary. Wiki is a larger and more diverse dataset, containing Wikipedia articles across many topics with 2.1M tokens for training and a 33K vocabulary. Additional dataset statistics are provided in Ta1Code for our experiments available at https:// github.com/urvashik/lm-context-analysis 2Public release of their code at https://github. com/salesforce/awd-lstm-lm 286 ble 1. In this paper, we present results only on the dev sets, in order to avoid revealing details about the test sets. However, we have confirmed that all results are consistent with those on the test sets. In addition, for all experiments we report averaged results from three models trained with different random seeds. Some of the figures provided contain trends from only one of the two datasets and the corresponding figures for the other dataset are provided in Appendix B. 4 How much context is used? LSTMs are designed to capture long-range dependencies in sequences (Hochreiter and Schmidhuber, 1997). In practice, LSTM language models are provided an infinite amount of prior context, which is as long as the test sequence goes. However, it is unclear how much of this history has a direct impact on model performance. In this section, we investigate how many tokens of context achieve a similar loss (or 1-2% difference in model perplexity) to providing the model infinite context. We consider this the effective context size. LSTM language models have an effective context size of about 200 tokens on average. We determine the effective context size by varying the number of tokens fed to the model. In particular, at test time, we feed the model the most recent n tokens: δtruncate(wt−1, . . . , w1) = (wt−1, . . . , wt−n), (1) where n > 0 and all tokens farther away from the target wt are dropped.3 We compare the dev loss (NLL) from truncated context, to that of the infinite-context setting where all previous words are fed to the model. The resulting increase in loss indicates how important the dropped tokens are for the model. Figure 1a shows that the difference in dev loss, between truncated- and infinite-context variants of the test setting, gradually diminishes as we increase n from 5 tokens to 1000 tokens. In particular, we only see a 1% increase in perplexity as we move beyond a context of 150 tokens on PTB and 250 tokens on Wiki. Hence, we provide empirical evidence to show that LSTM language models do, in fact, model long-range dependencies, without help from extra context vectors or caches. 3Words at the beginning of the test sequence with fewer than n tokens in the context are ignored for loss computation. Changing hyperparameters does not change the effective context size. NLM performance has been shown to be sensitive to hyperparameters such as the dropout rate and model size (Melis et al., 2018). 
To investigate if these hyperparameters affect the effective context size as well, we train separate models by varying the following hyperparameters one at a time: (1) number of timesteps for truncated back-propogation (2) dropout rate, (3) model size (hidden state size, number of layers, and word embedding size). In Figure 1b, we show that while different hyperparameter settings result in different perplexities in the infinite-context setting, the trend of how perplexity changes as we reduce the context size remains the same. 4.1 Do different types of words need different amounts of context? The effective context size determined in the previous section is aggregated over the entire corpus, which ignores the type of the upcoming word. Boyd-Graber and Blei (2009) have previously investigated the differences in context used by different types of words and found that function words rely on less context than content words. We investigate whether the effective context size varies across different types of words, by categorizing them based on either frequency or parts-ofspeech. Specifically, we vary the number of context tokens in the same way as the previous section, and aggregate loss over words within each class separately. Infrequent words need more context than frequent words. We categorize words that appear at least 800 times in the training set as frequent, and the rest as infrequent. Figure 1c shows that the loss of frequent words is insensitive to missing context beyond the 50 most recent tokens, which holds across the two datasets. Infrequent words, on the other hand, require more than 200 tokens. Content words need more context than function words. Given the parts-of-speech of each word, we define content words as nouns, verbs and adjectives, and function words as prepositions and determiners.4 Figure 1d shows that the loss of nouns and verbs is affected by distant context, whereas when the target word is a determiner, the model only relies on words within the last 10 tokens. 4We obtain part-of-speech tags using Stanford CoreNLP (Manning et al., 2014). 287 (a) Varying context size. (b) Changing model hyperparameters. (c) Frequent vs. infrequent words. (d) Different parts-of-speech. Figure 1: Effects of varying the number of tokens provided in the context, as compared to the same model provided with infinite context. Increase in loss represents an absolute increase in NLL over the entire corpus, due to restricted context. All curves are averaged over three random seeds, and error bars represent the standard deviation. (a) The model has an effective context size of 150 on PTB and 250 on Wiki. (b) Changing model hyperparameters does not change the context usage trend, but does change model performance. We report perplexities to highlight the consistent trend. (c) Infrequent words need more context than frequent words. (d) Content words need more context than function words. Discussion. Overall, we find that the model’s effective context size is dynamic. It depends on the target word, which is consistent with what we know about language, e.g., determiners require less context than nouns (Boyd-Graber and Blei, 2009). In addition, these findings are consistent with those previously reported for different language models and datasets (Hill et al., 2016; Wang and Cho, 2016). 5 Nearby vs. long-range context An effective context size of 200 tokens allows for representing linguistic information at many levels of abstraction, such as words, sentences, topics, etc. 
In this section, we investigate the importance of contextual information such as word order and word identity. Unlike prior work that studies LSTM embeddings at the sentence level, we look at both nearby and faraway context, and analyze how the language model treats contextual information presented in different regions of the context.

5.1 Does word order matter?

Adi et al. (2017) have shown that LSTMs are aware of word order within a sentence. We investigate whether LSTM language models are sensitive to word order within a larger context window. To determine the range in which word order affects model performance, we permute substrings in the context to observe their effect on dev loss compared to the unperturbed baseline. In particular, we perturb the context as follows,

\delta_{\text{permute}}(w_{t-1}, \ldots, w_{t-n}) = (w_{t-1}, \ldots, \rho(w_{t-s_1-1}, \ldots, w_{t-s_2}), \ldots, w_{t-n}),   (2)

where \rho \in \{\text{shuffle}, \text{reverse}\} and (s_1, s_2] denotes the range of the substring to be permuted. We refer to this substring as the permutable span. For the following analysis, we distinguish local word order, within 20-token permutable spans which are the length of an average sentence, from global word order, which extends beyond local spans to include all the farthest tokens in the history. We consider selecting permutable spans within a context of n = 300 tokens, which is greater than the effective context size.

Figure 2: Effects of shuffling and reversing the order of words in 300 tokens of context, relative to an unperturbed baseline. All curves are averages from three random seeds, where error bars represent the standard deviation. (a) Perturb order locally, within 20 tokens of each point: changing the order of words within a 20-token window has negligible effect on the loss after the first 20 tokens. (b) Perturb global order, i.e. all tokens in the context before a given point, in Wiki: changing the global order of words within the context does not affect loss beyond 50 tokens.

Local word order only matters for the most recent 20 tokens. We can locate the region of context beyond which the local word order has no relevance, by permuting word order locally at various points within the context. We accomplish this by varying s_1 and setting s_2 = s_1 + 20. Figure 2a shows that local word order matters very much within the most recent 20 tokens, and far less beyond that.

Global order of words only matters for the most recent 50 tokens. Similar to the local word order experiment, we locate the point beyond which the general location of words within the context is irrelevant, by permuting global word order. We achieve this by varying s_1 and fixing s_2 = n. Figure 2b demonstrates that after 50 tokens, shuffling or reversing the remaining words in the context has no effect on the model performance. In order to determine whether this is due to insensitivity to word order or whether the language model is simply not sensitive to any changes in the long-range context, we further replace words in the permutable span with a randomly sampled sequence of the same length from the training set. The gap between the permutation and replacement curves in Figure 2b illustrates that the identity of words in the far away context is still relevant, and only the order of the words is not.

Discussion. These results suggest that word order matters only within the most recent sentence, beyond which the order of sentences matters for 2-3 sentences (determined by our experiments on global word order).
After 50 tokens, word order has almost no effect, but the identity of those words is still relevant, suggesting a high-level, rough semantic representation for these faraway words. In light of these observations, we define 50 tokens as the boundary between nearby and longrange context, for the rest of this study. Next, we investigate the importance of different word types in the different regions of context. 5.2 Types of words and the region of context Open-class or content words such as nouns, verbs, adjectives and adverbs, contribute more to the semantic context of natural language than function words such as determiners and prepositions. Given our observation that the language model represents long-range context as a rough semantic representation, a natural question to ask is how important are function words in the long-range 289 Figure 3: Effect of dropping content and function words from 300 tokens of context relative to an unperturbed baseline, on PTB. Error bars represent 95% confidence intervals. Dropping both content and function words 5 tokens away from the target results in a nontrivial increase in loss, whereas beyond 20 tokens, only content words are relevant. context? Below, we study the effect of these two classes of words on the model’s performance. Function words are defined as all words that are not nouns, verbs, adjectives or adverbs. Content words matter more than function words. To study the effect of content and function words on model perplexity, we drop them from different regions of the context and compare the resulting change in loss. Specifically, we perturb the context as follows, δdrop(wt−1, . . . , wt−n) = (wt−1, .., wt−s1, fpos(y, (wt−s1−1, .., wt−n))) (3) where fpos(y, span) is a function that drops all words with POS tag y in a given span. s1 denotes the starting offset of the perturbed subsequence. For these experiments, we set s1 2 {5, 20, 100}. On average, there are slightly more content words than function words in any given text. As shown in Section 4, dropping more words results in higher loss. To eliminate the effect of dropping different fractions of words, for each experiment where we drop a specific word type, we add a control experiment where the same number of tokens are sampled randomly from the context, and dropped. Figure 3 shows that dropping content words as close as 5 tokens from the target word increases model perplexity by about 65%, whereas dropping the same proportion of tokens at random, results in a much smaller 17% increase. Dropping all function words, on the other hand, is not very different from dropping the same proportion of words at random, but still increases loss by about 15%. This suggests that within the most recent sentence, content words are extremely important but function words are also relevant since they help maintain grammaticality and syntactic structure. On the other hand, beyond a sentence, only content words have a sizeable influence on model performance. 6 To cache or not to cache? As shown in Section 5.1, LSTM language models use a high-level, rough semantic representation for long-range context, suggesting that they might not be using information from any specific words located far away. Adi et al. (2017) have also shown that while LSTMs are aware of which words appear in their context, this awareness degrades with increasing length of the sequence. 
However, the success of copy mechanisms such as attention and caching (Bahdanau et al., 2015; Hill et al., 2016; Merity et al., 2017; Grave et al., 2017a,b) suggests that information in the distant context is very useful. Given this fact, can LSTMs copy any words from context without relying on external copy mechanisms? Do they copy words from nearby and long-range context equally? How does the caching model help? In this section, we investigate these questions by studying how LSTMs copy words from different regions of context. More specifically, we look at two regions of context, nearby (within 50 most recent tokens) and longrange (beyond 50 tokens), and study three categories of target words: those that can be copied from nearby context (Cnear), those that can only be copied from long-range context (Cfar), and those that cannot be copied at all given a limited context (Cnone). 6.1 Can LSTMs copy words without caches? Even without a cache, LSTMs often regenerate words that have already appeared in prior context. We investigate how much the model relies on the previous occurrences of the upcoming target word, by analyzing the change in loss after dropping and replacing this target word in the context. LSTMs can regenerate words seen in nearby context. In order to demonstrate the usefulness 290 (a) Dropping tokens (b) Perturbing occurrences of target word in context. Figure 4: Effects of perturbing the target word in the context compared to dropping long-range context altogether, on PTB. Error bars represent 95% confidence intervals. (a) Words that can only be copied from long-range context are more sensitive to dropping all the distant words than to dropping the target. For words that can be copied from nearby context, dropping only the target has a much larger effect on loss compared to dropping the long-range context. (b) Replacing the target word with other tokens from vocabulary hurts more than dropping it from the context, for words that can be copied from nearby context, but has no effect on words that can only be copied from far away. of target word occurrences in context, we experiment with dropping all the distant context versus dropping only occurrences of the target word from the context. In particular, we compare removing all tokens after the 50 most recent tokens, (Equation 1 with n = 50), versus removing only the target word, in context of size n = 300: δdrop(wt−1, . . . , wt−n) = fword(wt, (wt−1, . . . , wt−n)), (4) where fword(w, span) drops words equal to w in a given span. We compare applying both perturbations to a baseline model with unperturbed context restricted to n = 300. We also include the target words that never appear in the context (Cnone) as a control set for this experiment. The results show that LSTMs rely on the rough semantic representation of the faraway context to generate Cfar, but direclty copy Cnear from the nearby context. In Figure 4a, the long-range context bars show that for words that can only be copied from long-range context (Cfar), removing all distant context is far more disruptive than removing only occurrences of the target word (12% and 2% increase in perplexity, respectively). This suggests that the model relies more on the rough semantic representation of faraway context to predict these Cfar tokens, rather than directly copying them from the distant context. 
On the other hand, for words that can be copied from nearby context (Cnear), removing all long-range context has a smaller effect (about 3.5% increase in perplexity) as seen in Figure 4a, compared to removing the target word which increases perplexity by almost 9%. This suggests that these Cnear tokens are more often copied from nearby context, than inferred from information found in the rough semantic representation of long-range context. However, is it possible that dropping the target tokens altogether, hurts the model too much by adversely affecting grammaticality of the context? We test this theory by replacing target words in the context with other words from the vocabulary. This perturbation is similar to Equation 4, except instead of dropping the token, we replace it with a different one. In particular, we experiment with replacing the target with <unk>, to see if having the generic word is better than not having any word. We also replace it with a word that has the same part-of-speech tag and a similar frequency in the dataset, to observe how much this change confuses the model. Figure 4b shows that replacing the target with other words results in up to a 14% increase in perplexity for Cnear, which suggests that the replacement token seems to confuse the model far more than when the token is simply dropped. However, the words that rely on the long-range context, Cfar, are largely unaffected by these changes, which confirms our conclusion from dropping the target tokens: Cfar 291 witnesses in the morris film </s> served up as a solo however the music lacks the UNK provided by a context within another medium </s> UNK of mr. glass may agree with the critic richard UNK 's sense that the NUM music in twelve parts is as UNK and UNK as the UNK UNK </s> but while making the obvious point that both UNK develop variations from themes this comparison UNK the intensely UNK nature of mr. glass </s> snack-food UNK increased a strong NUM NUM in the third quarter while domestic profit increased in double UNK mr. calloway said </s> excluding the british snack-food business acquired in july snack-food international UNK jumped NUM NUM with sales strong in spain mexico and brazil </s> total snack-food profit rose NUM NUM </s> led by pizza hut and UNK bell restaurant earnings increased about NUM NUM in the third quarter on a NUM NUM sales increase </s> UNK sales for pizza hut rose about NUM NUM while UNK bell 's increased NUM NUM as the chain continues to benefit from its UNK strategy </s> UNK bell has turned around declining customer counts by permanently lowering the price of its UNK </s> same UNK for kentucky fried chicken which has struggled with increased competition in the fast-food chicken market and a lack of new products rose only NUM NUM </s> the operation which has been slow to respond to consumers ' shifting UNK away from fried foods has been developing a UNK product that may be introduced nationally at the end of next year </s> the new product has performed well in a market test in las vegas nev. mr. 
calloway send a delegation of congressional staffers to poland to assist its legislature the UNK in democratic procedures </s> senator pete UNK calls this effort the first gift of democracy </s> the poles might do better to view it as a UNK horse </s> it is the vast shadow government of NUM congressional staffers that helps create such legislative UNK as the NUM page UNK reconciliation bill that claimed to be the budget of the united states </s> maybe after the staffers explain their work to the poles they 'd be willing to come back and do the same for the american people </s> UNK UNK plc a financially troubled irish maker of fine crystal and UNK china reported that its pretax loss for the first six months widened to NUM million irish punts $ NUM million from NUM million irish punts a year earlier </s> the results for the half were worse than market expectations which suggested an interim loss of around NUM million irish punts </s> in a sharply weaker london market yesterday UNK shares were down NUM pence at NUM pence NUM cents </s> the company reported a loss after taxation and minority interests of NUM million irish sim has set a fresh target of $ NUM a share by the end of </s> reaching that goal says robert t. UNK applied 's chief financial officer will require efficient reinvestment of cash by applied and UNK of its healthy NUM NUM rate of return on operating capital </s> in barry wright mr. sim sees a situation very similar to the one he faced when he joined applied as president and chief operating officer in NUM </s> applied then a closely held company was UNK under the management of its controlling family </s> while profitable it was n't growing and was n't providing a satisfactory return on invested capital he says </s> mr. sim is confident that the drive to dominate certain niche markets will work at barry wright as it has at applied </s> he also UNK an UNK UNK to develop a corporate culture that rewards managers who produce and where UNK is shared </s> mr. sim considers the new unit 's operations fundamentally sound and adds that barry wright has been fairly successful in moving into markets that have n't interested larger competitors </s> with a little patience these businesses will perform very UNK mr. sim was openly sympathetic to swapo </s> shortly after that mr. UNK had scott stanley arrested and his UNK confiscated </s> mr. stanley is on trial over charges that he violated a UNK issued by the south african administrator general earlier this year which made it a crime punishable by two years in prison for any person to UNK UNK or UNK the election commission </s> the stanley affair does n't UNK well for the future of democracy or freedom of anything in namibia when swapo starts running the government </s> to the extent mr. stanley has done anything wrong it may be that he is out of step with the consensus of world intellectuals that the UNK guerrillas were above all else the victims of UNK by neighboring south africa </s> swapo has enjoyed favorable western media treatment ever since the u.n. general assembly declared it the sole UNK representative of namibia 's people in </s> last year the u.s. UNK a peace settlement to remove cuba 's UNK UNK from UNK and hold free and fair elections that would end south africa 's control of namibia </s> the elections are set for nov. NUM </s> in july mr. 
stanley july snack-food international UNK jumped NUM NUM with sales strong in spain mexico and brazil </s> total snack-food profit rose NUM NUM </s> led by pizza hut and UNK bell restaurant earnings increased about NUM NUM in the third quarter on a NUM NUM sales increase </s> UNK sales for pizza hut rose about NUM NUM while UNK bell 's increased NUM NUM as the chain continues to benefit from its UNK strategy </s> UNK bell has turned around declining customer counts by permanently lowering the price of its UNK </s> same UNK for kentucky fried chicken which has struggled with increased competition in the fast-food chicken market and a lack of new products rose only NUM NUM </s> the operation which has been slow to respond to consumers ' shifting UNK away from fried foods has been developing a UNK product that may be introduced nationally at the end of next year </s> the new product has performed well in a market test in las vegas nev. mr. calloway said </s> after a four-year $ NUM billion acquisition binge that brought a major soft-drink company soda UNK a fast-food chain and an overseas snack-food giant to pepsi mr. calloway of london 's securities traders it was a day that started nervously in the small hours </s> by UNK the selling was at UNK fever </s> but as the day ended in a UNK wall UNK rally the city UNK a sigh of relief </s> so it went yesterday in the trading rooms of london 's financial district </s> in the wake of wall street 's plunge last friday the london market was considered especially vulnerable </s> and before the opening of trading here yesterday all eyes were on early trading in tokyo for a clue as to how widespread the fallout Figure 5: Success of neural cache on PTB. Brightly shaded region shows peaky distribution. management equity participation </s> further many institutions today holding troubled retailers ' debt securities will be UNK to consider additional retailing investments </s> it 's called bad money driving out good money said one retailing UNK </s> institutions that usually buy retail paper have to be more concerned </s> however the lower prices these retail chains are now expected to bring should make it easier for managers to raise the necessary capital and pay back the resulting debt </s> in addition the fall selling season has generally been a good one especially for those retailers dependent on apparel sales for the majority of their revenues </s> what 's encouraging about this is that retail chains will be sold on the basis of their sales and earnings not liquidation values said joseph e. brooks chairman and chief offerings outside the u.s. </s> goldman sachs & co. will manage the offering </s> macmillan said berlitz intends to pay quarterly dividends on the stock </s> the company said it expects to pay the first dividend of NUM cents a share in the NUM first quarter </s> berlitz will borrow an amount equal to its expected net proceeds from the offerings plus $ NUM million in connection with a credit agreement with lenders </s> the total borrowing will be about $ NUM million the company said </s> proceeds from the borrowings under the credit agreement will be used to pay an $ NUM million cash dividend to macmillan and to lend the remainder of about $ NUM million to maxwell communications in connection with a UNK note </s> proceeds from the offering will be used to repay borrowings under the short-term parts of a credit agreement </s> berlitz which is based in princeton n.j. 
provides language instruction and translation services through more than NUM language centers in NUM countries </s> in the past five years more than NUM NUM of its sales have been outside the u.s. </s> macmillan has owned berlitz since NUM </s> in the first six months said that despite losses on ual stock his firm 's health is excellent </s> the stock 's decline also has left the ual board in a UNK </s> although it may not be legally obligated to sell the company if the buy-out group ca n't revive its bid it may have to explore alternatives if the buyers come back with a bid much lower than the group 's original $ 300-a-share proposal </s> at a meeting sept. NUM to consider the labor-management bid the board also was informed by its investment adviser first boston corp. of interest expressed by buy-out funds including kohlberg kravis roberts & co. and UNK little & co. as well as by robert bass morgan stanley 's buy-out fund and pan am corp </s> the takeover-stock traders were hoping that mr. davis or one of the other interested parties might UNK with the situation in disarray or that the board might consider a recapitalization </s> meanwhile japanese bankers said they were still UNK about accepting citicorp 's latest proposal </s> macmillan inc. said it plans a public offering of NUM million shares of its berlitz international inc. unit at $ NUM to $ NUM a share capital markets to sell its hertz equipment rental corp. unit </s> there is no pressing need to sell the unit but we are doing it so we can concentrate on our core business UNK automobiles in the u.s. and abroad said william UNK hertz 's executive vice president </s> we are only going to sell at the right price </s> hertz equipment had operating profit before depreciation of $ NUM million on revenue of $ NUM million in NUM </s> the closely held hertz corp. had annual revenue of close to $ NUM billion in NUM of which $ NUM billion was contributed by its hertz rent a car operations world-wide </s> hertz equipment is a major supplier of rental equipment in the u.s. france spain and the UNK </s> it supplies commercial and industrial equipment including UNK UNK UNK and electrical equipment UNK UNK UNK and trucks </s> UNK inc. reported a net loss of $ NUM million for the fiscal third quarter ended aug. NUM </s> it said the loss resulted from UNK and introduction costs related to a new medical UNK equipment system </s> in the year-earlier quarter the company reported net income of $ NUM or acquisition of nine businesses that make up the group the biggest portion of which was related to the NUM purchase of a UNK co. unit </s> among other things the restructured facilities will substantially reduce the group 's required amortization of the term loan portion of the credit facilities through september NUM mlx said </s> certain details of the restructured facilities remain to be negotiated </s> the agreement is subject to completion of a definitive amendment and appropriate approvals </s> william p. UNK mlx chairman and chief executive said the pact will provide mlx with the additional time and flexibility necessary to complete the restructuring of the company 's capital structure </s> mlx has filed a registration statement with the securities and exchange commission covering a proposed offering of $ NUM million in long-term senior subordinated notes and warrants </s> dow jones & co. said it acquired a NUM NUM interest in UNK corp. a subsidiary of oklahoma publishing co. 
oklahoma city that provides electronic research services </s> terms were n't disclosed </s> customers of either UNK or dow jones UNK are able to access the information on both services </s> dow jones is the publisher of the wall street video games electronic information systems and playing cards posted a NUM NUM unconsolidated surge in pretax profit to NUM billion yen $ NUM million from NUM billion yen $ NUM million for the fiscal year ended aug. NUM </s> sales surged NUM NUM to NUM billion yen from NUM billion </s> net income rose NUM NUM to NUM billion yen from NUM billion </s> UNK net fell to NUM yen from NUM yen because of expenses and capital adjustments </s> without detailing specific product UNK UNK credited its bullish UNK in sales including advanced computer games and television entertainment systems to surging UNK sales in foreign markets </s> export sales for leisure items alone for instance totaled NUM billion yen in the NUM months up from NUM billion in the previous fiscal year </s> domestic leisure sales however were lower </s> hertz corp. of park UNK n.j. said it retained merrill lynch capital markets to sell its hertz equipment rental corp. unit </s> there is no pressing need to sell the unit but we are doing it so we can concentrate on our core business UNK automobiles in the u.s. and abroad said william UNK hertz 's executive vice president so-called road show to market the package around the world </s> an increasing number of banks appear to be considering the option Figure 6: Failure of neural cache on PTB. Lightly shaded regions show flat distribution. words are predicted from the rough representation of faraway context instead of specific occurrences of certain words. 6.2 How does the cache help? If LSTMs can already regenerate words from nearby context, how are copy mechanisms helping the model? We answer this question by analyzing how the neural cache model (Grave et al., 2017b) helps with improving model performance. The cache records the hidden state ht at each timestep t, and computes a cache distribution over the words in the history as follows: Pcache(wt|wt−1, . . . , w1; ht, . . . , h1) / t−1 X i=1 [wi = wt] exp(✓hT i ht), (5) where ✓controls the flatness of the distribution. This cache distribution is then interpolated with the model’s output distribution over the vocabulary. Consequently, certain words from the history are upweighted, encouraging the model to copy them. Caches help words that can be copied from long-range context the most. In order to study the effectiveness of the cache for the three classes of words (Cnear, Cfar, Cnone), we evaluate an LSTM language model with and without a cache, and measure the difference in perplexity for these words. In both settings, the model is provided all prior context (not just 300 tokens) in orFigure 7: Model performance relative to using a cache. Error bars represent 95% confidence intervals. Words that can only be copied from the distant context benefit the most from using a cache. der to replicate the Grave et al. (2017b) setup. The amount of history recorded, known as the cache size, is a hyperparameter set to 500 past timesteps for PTB and 3,875 for Wiki, both values very similar to the average document lengths in the respective datasets. We find that the cache helps words that can only be copied from long-range context (Cfar) more than words that can be copied from nearby (Cnear). 
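The cache distribution of Equation 5 and its interpolation with the model distribution can be sketched as follows. This is a minimal, hedged illustration: the interpolation weight lam, the variable names, and the use of plain NumPy are our own assumptions rather than the exact setup of Grave et al. (2017b).

```python
import numpy as np

def cache_distribution(h_t, history_h, history_w, vocab_size, theta=0.3):
    """Cache distribution over the vocabulary (cf. Equation 5).

    h_t:       (d,) current hidden state
    history_h: (T, d) recorded hidden states h_1 .. h_{t-1}
    history_w: (T,) integer word ids w_1 .. w_{t-1} seen at those steps
    theta:     controls the flatness of the distribution
    """
    scores = np.exp(theta * (history_h @ h_t))   # similarity to each past state
    p_cache = np.zeros(vocab_size)
    for w, s in zip(history_w, scores):
        p_cache[w] += s                           # mass accumulates on past words
    total = p_cache.sum()
    return p_cache / total if total > 0 else p_cache

def with_cache(p_model, p_cache, lam=0.1):
    """Interpolate the LM's softmax output with the cache distribution."""
    return (1.0 - lam) * p_model + lam * p_cache
```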
This is illustrated by Figure 7 where without caching, Cnear words see a 22% increase in perplexity for PTB, and a 32% increase for Wiki, whereas Cfar see a 28% increase in perplexity for PTB, and a whopping 53% increase for Wiki. Thus, the cache is, in a sense, complementary to the standard model, since it especially helps regenerate words from the long-range context where the latter falls short. 292 However, the cache also hurts about 36% of the words in PTB and 20% in Wiki, which are words that cannot be copied from context (Cnone), as illustrated by bars for “none” in Figure 7. We also provide some case studies showing success (Fig. 5) and failure (Fig. 6) modes for the cache. We find that for the successful case, the cache distribution is concentrated on a single word that it wants to copy. However, when the target is not present in the history, the cache distribution is more flat, illustrating the model’s confusion, shown in Figure 6. This suggests that the neural cache model might benefit from having the option to ignore the cache when it cannot make a confident choice. 7 Discussion The findings presented in this paper provide a great deal of insight into how LSTMs model context. This information can prove extremely useful for improving language models. For instance, the discovery that some word types are more important than others can help refine word dropout strategies by making them adaptive to the different word types. Results on the cache also show that we can further improve performance by allowing the model to ignore the cache distribution when it is extremely uncertain, such as in Figure 6. Differences in nearby vs. long-range context suggest that memory models, which feed explicit context representations to the LSTM (Ghosh et al., 2016; Lau et al., 2017), could benefit from representations that specifically capture information orthogonal to that modeled by the LSTM. In addition, the empirical methods used in this study are model-agnostic and can generalize to models other than the standard LSTM. This opens the path to generating a stronger understanding of model classes beyond test set perplexities, by comparing them across additional axes of information such as how much context they use on average, or how robust they are to shuffled contexts. Given the empirical nature of this study and the fact that the model and data are tightly coupled, separating model behavior from language characteristics, has proved challenging. More specifically, a number of confounding factors such as vocabulary size, dataset size etc. make this separation difficult. In an attempt to address this, we have chosen PTB and Wiki - two standard language modeling datasets which are diverse in content (news vs. factual articles) and writing style, and are structured differently (eg: Wiki articles are 4-6x longer on average and contain extra information such as titles and paragraph/section markers). Making the data sources diverse in nature, has provided the opportunity to somewhat isolate effects of the model, while ensuring consistency in results. An interesting extension to further study this separation would lie in experimenting with different model classes and even different languages. Recently, Chelba et al. (2017), in proposing a new model, showed that on PTB, an LSTM language model with 13 tokens of context is similar to the infinite-context LSTM performance, with close to an 8% 5 increase in perplexity. This is compared to a 25% increase at 13 tokens of context in our setup. 
We believe this difference is attributed to the fact that their model was trained with restricted context and a different error propagation scheme, while ours is not. Further investigation would be an interesting direction for future work. 8 Conclusion In this analytic study, we have empirically shown that a standard LSTM language model can effectively use about 200 tokens of context on two benchmark datasets, regardless of hyperparameter settings such as model size. It is sensitive to word order in the nearby context, but less so in the long-range context. In addition, the model is able to regenerate words from nearby context, but heavily relies on caches to copy words from far away. These findings not only help us better understand these models but also suggest ways for improving them, as discussed in Section 7. While observations in this paper are reported at the token level, deeper understanding of sentence-level interactions warrants further investigation, which we leave to future work. Acknowledgments We thank Arun Chaganty, Kevin Clark, Reid Pryzant, Yuhao Zhang and our anonymous reviewers for their thoughtful comments and suggestions. We gratefully acknowledge support of the DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF15-10462 and the NSF via grant IIS-1514268. 5Table 3, 91 perplexity for the 13-gram vs. 84 for the infinite context model. 293 References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Finegrained analysis of sentence embeddings using auxiliary prediction tasks. International Conference on Learning Representations (ICLR) https://openreview.net/pdf?id=BJh6Ztuxl. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations (ICLR) https://arxiv.org/pdf/1409.0473.pdf. Jordan Boyd-Graber and David Blei. 2009. Syntactic topic models. In Advances in neural information processing systems. pages 185– 192. https://papers.nips.cc/paper/3398-syntactictopic-models.pdf. Ciprian Chelba, Mohammad Norouzi, and Samy Bengio. 2017. N-gram language modeling using recurrent neural network estimation. arXiv preprint arXiv:1703.10724 https://arxiv.org/pdf/1703.10724.pdf. Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. International Conference on Machine Learning (ICML) https://arxiv.org/pdf/1612.08083.pdf. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Advances in neural information processing systems (NIPS). pages 1019–1027. https://arxiv.org/pdf/1512.05287.pdf. Shalini Ghosh, Oriol Vinyals, Brian Strope, Scott Roy, Tom Dean, and Larry Heck. 2016. Contextual lstm (clstm) models for large scale nlp tasks. Workshop on Large-scale Deep Learning for Data Mining, KDD https://arxiv.org/pdf/1602.06291.pdf. Edouard Grave, Moustapha M Cisse, and Armand Joulin. 2017a. Unbounded cache model for online language modeling with open vocabulary. In Advances in Neural Information Processing Systems (NIPS). pages 6044–6054. https://papers.nips.cc/paper/7185-unboundedcache-model-for-online-language-modeling-withopen-vocabulary.pdf. Edouard Grave, Armand Joulin, and Nicolas Usunier. 2017b. Improving Neural Language Models with a Continuous Cache. International Conference on Learning Representations (ICLR) https://openreview.net/pdf?id=B184E5qee. Alex Graves. 2013. 
Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 https://arxiv.org/pdf/1308.0850.pdf. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The goldilocks principle: Reading children’s books with explicit memory representations. International Conference on Learning Representations (ICLR) https://arxiv.org/pdf/1511.02301.pdf. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735. Hakan Inan, Khashayar Khosravi, and Richard Socher. 2017. Tying word vectors and word classifiers: A loss framework for language modeling. International Conference on Learning Representations (ICLR) https://openreview.net/pdf?id=r1aPbsFle. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410 https://arxiv.org/pdf/1602.02410.pdf. Jey Han Lau, Timothy Baldwin, and Trevor Cohn. 2017. Topically Driven Neural Language Model. Association for Computational Linguistics (ACL) https://doi.org/10.18653/v1/P17-1033. Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in nlp. North American Association of Computational Linguistics (NAACL) http://www.aclweb.org/anthology/N16-1082. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics (TACL) http://aclweb.org/anthology/Q16-1037. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations. pages 55–60. https://doi.org/10.3115/v1/P14-5010. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics 19(2):313–330. http://aclweb.org/anthology/J93-2004. Gabor Melis, Chris Dyer, and Phil Blunsom. 2018. On the State of the Art of Evaluation in Neural Language Models. International Conference on Learning Representations (ICLR) https://openreview.net/pdf?id=ByJHuTgA-. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and Optimizing LSTM Language Models. International Conference on Learning Representations (ICLR) https://openreview.net/pdf?id=SyyGPP0TZ. 294 Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer Sentinel Mixture Models. International Conference on Learning Representations (ICLR) https://openreview.net/pdf?id=Byj72udxe. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. European Chapter of the Association for Computational Linguistics http://aclweb.org/anthology/E17-2025. Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. 2013. Regularization of neural networks using dropconnect. In International Conference on Machine Learning (ICML). pages 1058– 1066. Tian Wang and Kyunghyun Cho. 2016. Larger-Context Language Modelling with Recurrent Neural Network. 
Association for Computational Linguistics (ACL) https://doi.org/10.18653/v1/P16-1125. Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. 2018. Breaking the softmax bottleneck: A high-rank RNN language model. International Conference on Learning Representations (ICLR) https://openreview.net/pdf?id=HkwZSGCZ.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 295–305 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 295 SoPa: Bridging CNNs, RNNs, and Weighted Finite-State Machines Roy Schwartz⇤}~ Sam Thomson⇤| Noah A. Smith} }Paul G. Allen School of Computer Science & Engineering, University of Washington |Language Technologies Institute, Carnegie Mellon University ~Allen Institute for Artificial Intelligence {roysch,nasmith}@cs.washington.edu, [email protected] Abstract Recurrent and convolutional neural networks comprise two distinct families of models that have proven to be useful for encoding natural language utterances. In this paper we present SoPa, a new model that aims to bridge these two approaches. SoPa combines neural representation learning with weighted finite-state automata (WFSAs) to learn a soft version of traditional surface patterns. We show that SoPa is an extension of a one-layer CNN, and that such CNNs are equivalent to a restricted version of SoPa, and accordingly, to a restricted form of WFSA. Empirically, on three text classification tasks, SoPa is comparable or better than both a BiLSTM (RNN) baseline and a CNN baseline, and is particularly useful in small data settings. 1 Introduction Recurrent neural networks (RNNs; Elman, 1990) and convolutional neural networks (CNNs; LeCun, 1998) are two of the most useful text representation learners in NLP (Goldberg, 2016). These methods are generally considered to be quite different: the former encodes an arbitrarily long sequence of text, and is highly expressive (Siegelmann and Sontag, 1995). The latter is more local, encoding fixed length windows, and accordingly less expressive. In this paper, we seek to bridge the gap between RNNs and CNNs, presenting SoPa (for Soft Patterns), a model that lies in between them. SoPa is a neural version of a weighted finitestate automaton (WFSA), with a restricted set of transitions. Linguistically, SoPa is appealing as it ⇤The first two authors contributed equally. START 1 2 3 4 END What a great X ! funny, magical ✏ Figure 1: A representation of a surface pattern as a six-state automaton. Self-loops allow for repeatedly inserting words (e.g., “funny”). ✏-transitions allow for dropping words (e.g., “a”). is able to capture a soft notion of surface patterns (e.g., “what a great X !”; Hearst, 1992), where some words may be dropped, inserted, or replaced with similar words (see Figure 1). From a modeling perspective, SoPa is interesting because WFSAs are well-studied and come with efficient and flexible inference algorithms (Mohri, 1997; Eisner, 2002) that SoPa can take advantage of. SoPa defines a set of soft patterns of different lengths, with each pattern represented as a WFSA (Section 3). While the number and lengths of the patterns are hyperparameters, the patterns themselves are learned end-to-end. SoPa then represents a document with a vector that is the aggregate of the scores computed by matching each of the patterns with each span in the document. Because SoPa defines a hidden state that depends on the input token and the previous state, it can be thought of as a simple type of RNN. We show that SoPa is an extension of a onelayer CNN (Section 4). Accordingly, one-layer CNNs can be viewed as a collection of linearchain WFSAs, each of which can only match fixed-length spans, while our extension allows matches of flexible-length. 
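To make the hard-pattern analogue of Figure 1 concrete before the formal treatment in Sections 2 and 3, the sketch below writes "What a great X !" as an ordinary regular expression, with optional groups standing in for the self-loops (word insertion) and epsilon-transitions (word deletion). This is only a rough illustration using Python's standard re module, not part of SoPa itself; SoPa replaces these all-or-nothing choices with learned, weighted transitions.

import re

# Rough hard-pattern analogue of Figure 1: "What a great X !".
#   (?:\w+ )*  -- self-loop: zero or more inserted words (e.g., "funny")
#   (?:a )?    -- epsilon-transition: the word "a" may be dropped
#   \w+        -- the wildcard slot X
hard_pattern = re.compile(r"What (?:\w+ )*(?:a )?great (?:\w+ )*\w+ !")

print(bool(hard_pattern.fullmatch("What a great movie !")))        # True
print(bool(hard_pattern.fullmatch("What a great funny movie !")))  # True (insertion)
print(bool(hard_pattern.fullmatch("What great movie !")))          # True ("a" dropped)
print(bool(hard_pattern.fullmatch("What a mediocre movie !")))     # False: no partial credit

Unlike this binary matcher, a soft pattern assigns every span a score, so near-matches like the last example can still contribute evidence.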
As a simple type of RNN that is more expressive than a CNN, SoPa helps to link CNNs and RNNs. 296 To test the utility of SoPa, we experiment with three text classification tasks (Section 5). We compare against four baselines, including both a bidirectional LSTM and a CNN. Our model performs on par with or better than all baselines on all tasks (Section 6). Moreover, when training with smaller datasets, SoPa is particularly useful, outperforming all models by substantial margins. Finally, building on the connections discovered in this paper, we offer a new, simple method to interpret SoPa (Section 7). This method applies equally well to CNNs. We release our code at https://github.com/ Noahs-ARK/soft_patterns. 2 Background Surface patterns. Patterns (Hearst, 1992) are particularly useful tool in NLP (Lin et al., 2003; Etzioni et al., 2005; Schwartz et al., 2015). The most basic definition of a pattern is a sequence of words and wildcards (e.g., “X is a Y”), which can either be manually defined or extracted from a corpus using cooccurrence statistics. Patterns can then be matched against a specific text span by replacing wildcards with concrete words. Davidov et al. (2010) introduced a flexible notion of patterns, which supports partial matching of the pattern with a given text by skipping some of the words in the pattern, or introducing new words. In their framework, when a sequence of text partially matches a pattern, hard-coded partial scores are assigned to the pattern match. Here, we represent patterns as WFSAs with neural weights, and support these partial matches in a soft manner. WFSAs. We review weighted finite-state automata with ✏-transitions before we move on to our special case in Section 3. A WFSA-✏with d states over a vocabulary V is formally defined as a tuple F = h⇡, T, ⌘i, where ⇡2 Rd is an initial weight vector, T : (V [ {✏}) ! Rd⇥d is a transition weight function, and ⌘2 Rd is a final weight vector. Given a sequence of words in the vocabulary x = hx1, . . . , xni, the Forward algorithm (Baum and Petrie, 1966) scores x with respect to F. Without ✏-transitions, Forward can be written as a series of matrix multiplications: p0 span(x) = ⇡> n Y i=1 T(xi) ! ⌘ (1) ✏-transitions are followed without consuming a word, so Equation 1 must be updated to reflect the possibility of following any number (zero or more) of ✏-transitions in between consuming each word: pspan(x) = ⇡>T(✏)⇤ n Y i=1 T(xi)T(✏)⇤ ! ⌘ (2) where ⇤is matrix asteration: A⇤:= P1 j=0 Aj. In our experiments we use a first-order approximation, A⇤⇡I + A, which corresponds to allowing zero or one ✏-transition at a time. When the FSA F is probabilistic, the result of the Forward algorithm can be interpreted as the marginal probability of all paths through F while consuming x (hence the symbol “p”). The Forward algorithm can be generalized to any semiring (Eisner, 2002), a fact that we make use of in our experiments and analysis.1 The vanilla version of Forward uses the sum-product semiring: ⊕is addition, ⌦is multiplication. A special case of Forward is the Viterbi algorithm (Viterbi, 1967), which sets ⊕to the max operator. Viterbi finds the highest scoring path through F while consuming x. Both Forward and Viterbi have runtime O(d3 + d2n), requiring just a single linear pass through the phrase. Using firstorder approximate asteration, this runtime drops to O(d2n).2 Finally, we note that Forward scores are for exact matches—the entire phrase must be consumed. 
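To make Equations 1 and 2 concrete, here is a minimal NumPy sketch of the Forward score for a single phrase, using the first-order approximation of epsilon-asteration described above. The names pi, eta, transition, and T_eps are illustrative placeholders rather than the released implementation; replacing the sums inside the matrix products with max yields Viterbi.

import numpy as np

def forward_score(pi, eta, transition, T_eps, phrase):
    """Sum-product Forward score of an exact phrase match (Equation 2).

    pi         -- initial weight vector, shape (d,)
    eta        -- final weight vector, shape (d,)
    transition -- function mapping a word to its d x d matrix T(x)
    T_eps      -- d x d matrix of epsilon-transition weights T(eps)
    """
    d = pi.shape[0]
    eps_star = np.eye(d) + T_eps      # first-order asteration: zero or one eps step
    alpha = pi @ eps_star             # row vector of forward scores over states
    for x in phrase:
        alpha = alpha @ transition(x) @ eps_star
    return float(alpha @ eta)

Each iteration costs O(d^2) with a dense transition matrix, giving the O(d^2 n) runtime quoted above; SoPa's three-diagonal transitions reduce this further to O(dn).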
We show in Section 3.2 how phrase-level scores can be summarized into a document-level score. 3 SoPa: A Weighted Finite-State Automaton RNN We introduce SoPa, a WFSA-based RNN, which is designed to represent text as collection of surface pattern occurrences. We start by showing how a single pattern can be represented as a WFSA-✏ (Section 3.1). Then we describe how to score a complete document using a pattern (Section 3.2), and how multiple patterns can be used to encode a document (Section 3.3). Finally, we show that SoPa can be seen as a simple variant of an RNN (Section 3.4). 1The semiring parsing view (Goodman, 1999) has produced unexpected connections in the past (Eisner, 2016). We experiment with max-product and max-sum semirings, but note that our model could be easily updated to use any semiring. 2In our case, we also use a sparse transition matrix (Section 3.1), which further reduces our runtime to O(dn). 297 3.1 Patterns as WFSAs We describe how a pattern can be represented as a WFSA-✏. We first assume a single pattern. A pattern is a WFSA-✏, but we impose hard constraints on its shape, and its transition weights are given by differentiable functions that have the power to capture concrete words, wildcards, and everything in between. Our model is designed to behave similarly to flexible hard patterns (see Section 2), but to be learnable directly and “end-to-end” through backpropagation. Importantly, it will still be interpretable as simple, almost linear-chain, WFSA-✏. Each pattern has a sequence of d states (in our experiments we use patterns of varying lengths between 2 and 7). Each state i has exactly three possible outgoing transitions: a self-loop, which allows the pattern to consume a word without moving states, a main path transition to state i + 1 which allows the pattern to consume one token and move forward one state, and an ✏-transition to state i + 1, which allows the pattern to move forward one state without consuming a token. All other transitions are given score 0. When processing a sequence of text with a pattern p, we start with a special START state, and only move forward (or stay put), until we reach the special END state.3 A pattern with d states will tend to match token spans of length d −1 (but possibly shorter spans due to ✏-transitions, or longer spans due to self-loops). See Figure 1 for an illustration. Our transition function, T, is a parameterized function that returns a d ⇥d matrix. For a word x: [T(x)]i,j = 8 > < > : E(ui · vx + ai), if j = i (self-loop) E(wi · vx + bi), if j = i + 1 0, otherwise, (3) where ui and wi are vectors of parameters, ai and bi are scalar parameters, vx is a fixed pre-trained word vector for x,4 and E is an encoding function, typically the identity function or sigmoid. ✏-transitions are also parameterized, but don’t consume a token and depend only on the current state: [T(✏)]i,j = ( E(ci), if j = i + 1 0, otherwise, (4) where ci is a scalar parameter.5 As we have only 3To ensure that we start in the START state and end in the END state, we fix ⇡= [1, 0, . . . , 0] and ⌘= [0, . . . , 0, 1]. 4We use GloVe 300d 840B (Pennington et al., 2014). 5Adding ✏-transitions to WFSAs does not increase their three non-zero diagonals in total, the matrix multiplications in Equation 2 can be implemented using vector operations, and the overall runtimes of Forward and Viterbi are reduced to O(dn).6 Words vs. wildcards. Traditional hard patterns distinguish between words and wildcards. 
Our model does not explicitly capture the notion of either, but the transition weight function can be interpreted in those terms. Each transition is a logistic regression over the next word vector vx. For example, for a main path out of state i, T has two parameters, wi and bi. If wi has large magnitude and is close to the word vector for some word y (e.g., wi ⇡100vy), and bi is a large negative bias (e.g., bi ⇡−100), then the transition is essentially matching the specific word y. Whereas if wi has small magnitude (wi ⇡0) and bi is a large positive bias (e.g., bi ⇡100), then the transition is ignoring the current token and matching a wildcard.7 The transition could also be something in between, for instance by focusing on specific dimensions of a word’s meaning encoded in the vector, such as POS or semantic features like animacy or concreteness (Rubinstein et al., 2015; Tsvetkov et al., 2015). 3.2 Scoring Documents So far we described how to calculate how well a pattern matches a token span exactly (consuming the whole span). To score a complete document, we prefer a score that aggregates over all matches on subspans of the document (similar to “search” instead of “match” in regular expression parlance). We still assume a single pattern. Either the Forward algorithm can be used to calculate the expected count of the pattern in the document, P 1ijn pspan(xi:j), or Viterbi to calculate sdoc(x) = max1ijn sspan(xi:j), the score of the highest-scoring match. In short documents, we expect patterns to typically occur at most once, so in our experiments we choose the Viterbi algorithm, i.e., the max-product semiring. Implementation details. We give the specific recurrences we use to score documents in a single expressive power, and in fact slightly complicates the Forward equations. We use them as they require fewer parameters, and make the modeling connection between (hard) flexible patterns and our (soft) patterns more direct and intuitive. 6Our implementation is optimized to run on GPUs, so the observed runtime is even sublinear in d. 7A large bias increases the eagerness to match any word. 298 pass with this model. We define: [maxmul(A, B)]i,j = max k Ai,kBk,j. (5) We also define the following for taking zero or one ✏-transitions: eps (h) = maxmul (h, max(I, T(✏))) (6) where max is element-wise max. We maintain a row vector ht at each token:8 h0 = eps(⇡>), (7a) ht+1 = max (eps(maxmul (ht, T(xt+1))), h0), (7b) and then extract and aggregate END state values: st = maxmul (ht, ⌘), (8a) sdoc = max 1tn st. (8b) [ht]i represents the score of the best path through the pattern that ends in state i after consuming t tokens. By including h0 in Equation 7b, we are accounting for spans that start at time t + 1. st is the maximum of the exact match scores for all spans ending at token t. And sdoc is the maximum score of any subspan in the document. 3.3 Aggregating Multiple Patterns We describe how k patterns are aggregated to score a document. These k patterns give k different sdoc scores for the document, which are stacked into a vector z 2 Rk and constitute the final document representation of SoPa. This vector representation can be viewed as a feature vector. In this paper, we feed it into a multilayer perceptron (MLP), culminating in a softmax to give a probability distribution over document labels. We minimize cross-entropy, allowing the SoPa and MLP parameters to be learned end-to-end. 
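The recurrences in Equations 5 through 8 can be written out directly. Below is a minimal NumPy sketch for scoring a document with a single pattern under the max-product semiring; pi, eta, transition, and T_eps are assumed given (in SoPa they come from the parameterized transition functions in Equations 3 and 4), and a dense d x d matrix is used for clarity even though the real transitions have only three non-zero diagonals.

import numpy as np

def maxmul(A, B):
    # [maxmul(A, B)]_{i,j} = max_k A_{i,k} * B_{k,j}   (Equation 5)
    return (A[:, :, None] * B[None, :, :]).max(axis=1)

def score_document(pi, eta, transition, T_eps, tokens):
    """Max-product score of the best-matching span in the document (Equations 6-8)."""
    d = pi.shape[0]
    eps_star = np.maximum(np.eye(d), T_eps)     # element-wise max(I, T(eps)), Equation 6
    h0 = maxmul(pi.reshape(1, d), eps_star)     # Equation 7a
    h, s_doc = h0, float("-inf")
    for x in tokens:
        # consume one token, allow one eps step, or restart a new span (Equation 7b)
        h = np.maximum(maxmul(maxmul(h, transition(x)), eps_star), h0)
        s_t = float((h * eta).max())            # best span ending at this token (Equation 8a)
        s_doc = max(s_doc, s_t)                 # Equation 8b
    return s_doc

With pi = [1, 0, ..., 0] and eta = [0, ..., 0, 1] as in footnote 3, h[0, i] tracks the best partial match currently in state i; stacking the scores of k such patterns gives the document vector z described above.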
SoPa uses a total of (2e + 3)dk parameters, where e is the word embedding dimension, d is the number of states and k is the number of patterns. For comparison, an LSTM with a hidden dimension of h has 4((e + 1)h + h2). In Section 6 we show that SoPa consistently uses fewer parameters than a BiLSTM baseline to achieve its best result. 8Here a row vector h of size n can also be viewed as a 1 ⇥n matrix. 3.4 SoPa as an RNN SoPa can be considered an RNN. As shown in Section 3.2, a single pattern with d states has a hidden state vector of size d. Stacking the k hidden state vectors of k patterns into one vector of size k ⇥d can be thought of as the hidden state of our model. This hidden state is, like in any other RNN, dependent of the input and the previous state. Using selfloops, the hidden state at time point i can in theory depend on the entire history of tokens up to xi (see Figure 2 for illustration). We do want to discourage the model from following too many self-loops, only doing so if it results in a better fit with the remainder of the pattern. To do this we use the sigmoid function as our encoding function E (see Equation 3), which means that all transitions have scores strictly less than 1. This works to keep pattern matches close to their intended length. Using other encoders, such as the identity function, can result in different dynamics, potentially encouraging rather than discouraging self-loops. Although even single-layer RNNs are Turing complete (Siegelmann and Sontag, 1995), SoPa’s expressive power depends on the semiring. When a WFSA is thought of as a function from finite sequences of tokens to semiring values, it is restricted to the class of functions known as rational series (Sch¨utzenberger, 1961; Droste and Gastin, 1999; Sakarovitch, 2009).9 It is unclear how limiting this theoretical restriction is in practice, especially when SoPa is used as a component in a larger network. We defer the investigation of the exact computational properties of SoPa to future work. In the next section, we show that SoPa is an extension of a one-layer CNN, and hence more expressive. 4 SoPa as a CNN Extension A convolutional neural network (CNN; LeCun, 1998) moves a fixed-size sliding window over the document, producing a vector representation for each window. These representations are then often summed, averaged, or max-pooled to produce a document-level representation (Kim, 2014; Yin and Sch¨utze, 2015). In this section, we show that SoPa is an extension of one-layer, max-pooled CNNs. To recover a CNN from a soft pattern with d+1 states, we first remove self-loops and ✏-transitions, 9Rational series generalize recognizers of regular languages, which are the special case of the Boolean semiring. 299 Fielding’s funniest and most likeable book in years max-pooled END states pattern1 states word vectors pattern2 states START states Figure 2: State activations of two patterns as they score a document. pattern1 (length three) matches on “in years”. pattern2 (length five) matches on “funniest and most likeable book”, using a self-loop to consume the token “most”. Active states in the best match are marked with arrow cursors. retaining only the main path transitions. We also use the identity function as our encoder E (Equation 3), and use the max-sum semiring. With only main path transitions, the network will not match any span that is not exactly d tokens long. 
Using max-sum, spans of length d will be assigned the score: sspan(xi:i+d) = d−1 X j=0 wj · vxi+j + bj, (9a) =w0:d · vxi:i+d + d−1 X j=0 bj, (9b) where w0:d = [w> 0 ; . . . ; w> d−1]>, vxi:i+d = [v> xi; . . . ; v> xi+d−1]>. Rearranged this way, we recognize the span score as an affine transformation of the concatenated word vectors vxi:i+d. If we use k patterns, then together their span scores correspond to a linear filter with window size d and output dimension k.10 A single pattern’s score for a document is: sdoc(x) = max 1in−d+1 sspan(xi:i+d). (10) The max in Equation 10 is calculated for each pattern independently, corresponding exactly to element-wise max-pooling of the CNN’s output layer. Based on the equivalence between this impoverished version of SoPa and CNNs, we conclude that one-layer CNNs are learning an even more 10This variant of SoPa has d bias parameters, which correspond to only a single bias parameter in a CNN. The redundant biases may affect optimization but are an otherwise unimportant difference. restricted class of WFSAs (linear-chain WFSAs) that capture only fixed-length patterns. One notable difference between SoPa and arbitrary CNNs is that in general CNNs can use any filter (like an MLP over vxi:i+d, for example). In contrast, in order to efficiently pool over flexiblelength spans, SoPa is restricted to operations that follow the semiring laws.11 As a model that is more flexible than a one-layer CNN, but (arguably) less expressive than many RNNs, SoPa lies somewhere on the continuum between these two approaches. Continuing to study the bridge between CNNs and RNNs is an exciting direction for future research. 5 Experiments To evaluate SoPa, we apply it to text classification tasks. Below we describe our datasets and baselines. More details can be found in Appendix A. Datasets. We experiment with three binary classification datasets. • SST. The Stanford Sentiment Treebank (Socher et al., 2013)12 contains roughly 10K movie reviews from Rotten Tomatoes,13 labeled on a scale of 1–5. We consider the binary task, which considers 1 and 2 as negative, and 4 and 5 as positive (ignoring 3s). It is worth noting that this dataset also contains syntactic phrase level annotations, providing a sentiment label to parts of 11The max-sum semiring corresponds to a linear filter with max-pooling. Other semirings could potentially model more interesting interactions, but we leave this to future work. 12https://nlp.stanford.edu/sentiment/ index.html 13http://www.rottentomatoes.com 300 sentences. In order to experiment in a realistic setup, we only consider the complete sentences, and ignore syntactic annotations at train or test time. The number of training/development/test sentences in the dataset is 6,920/872/1,821. • Amazon. The Amazon Review Corpus (McAuley and Leskovec, 2013)14 contains electronics product reviews, a subset of a larger review dataset. Each document in the dataset contains a review and a summary. Following Yogatama et al. (2015), we only use the reviews part, focusing on positive and negative reviews. The number of training/development/test samples is 20K/5K/25K. • ROC. The ROC story cloze task (Mostafazadeh et al., 2016) is a story understanding task.15 The task is composed of four-sentence story prefixes, followed by two competing endings: one that makes the joint five-sentence story coherent, and another that makes it incoherent. Following Schwartz et al. 
(2017), we treat it as a style detection task: we treat all “right” endings as positive samples and all “wrong” ones as negative, and we ignore the story prefix. We split the development set into train and development (of sizes 3,366 and 374 sentences, respectively), and take the test set as-is (3,742 sentences). Reduced training data. In order to test our model’s ability to learn from small datasets, we also randomly sample 100, 500, 1,000 and 2,500 SST training instances and 100, 500, 1,000, 2,500, 5,000, and 10,000 Amazon training instances. Development and test sets remain the same. Baselines. We compare to four baselines: a BiLSTM, a one-layer CNN, DAN (a simple alternative to RNNs) and a feature-based classifier trained with hard-pattern features. • BiLSTM. Bidirectional LSTMs have been successfully used in the past for text classification tasks (Zhou et al., 2016). We learn a one-layer BiLSTM representation of the document, and feed the average of all hidden states to an MLP. • CNN. CNNs are particularly useful for text classification (Kim, 2014). We train a one-layer CNN with max-pooling, and feed the resulting representation to an MLP. 14http://riejohnson.com/cnn_data.html 15http://cs.rochester.edu/nlp/ rocstories/ • DAN. We learn a deep averaging network with word dropout (Iyyer et al., 2015), a simple but strong text-classification baseline. • Hard. We train a logistic regression classifier with hard-pattern features. Following Tsur et al. (2010), we replace low frequency words with a special wildcard symbol. We learn sequences of 1–6 concrete words, where any number of wildcards can come between two adjacent words. We consider words occurring with frequency of at least 0.01% of our training set as concrete words, and words occurring in frequency 1% or less as wildcards.16 Number of patterns. SoPa requires specifying the number of patterns to be learned, and their lengths. Preliminary experiments showed that the model doesn’t benefit from more than a few dozen patterns. We experiment with several configurations of patterns of different lengths, generally considering 0, 10 or 20 patterns of each pattern length between 2–7. The total number of patterns learned ranges between 30–70.17 6 Results Table 1 shows our main experimental results. In two of the cases (SST and ROC), SoPa outperforms all models. On Amazon, SoPa performs within 0.3 points of CNN and BiLSTM, and outperforms the other two baselines. The table also shows the number of parameters used by each model for each task. Given enough data, models with more parameters should be expected to perform better. However, SoPa performs better or roughly the same as a BiLSTM, which has 3–6 times as many parameters. Figure 3 shows a comparison of all models on the SST and Amazon datasets with varying training set sizes. SoPa is substantially outperforming all baselines, in particular BiLSTM, on small datasets (100 samples). This suggests that SoPa is better fit to learn from small datasets. Ablation analysis. Table 1 also shows an ablation of the differences between SoPa and CNN: max-product semiring with sigmoid vs. max-sum semiring with identity, self-loops, and ✏-transitions. The last line is equivalent to a CNN with 16Some words may serve as both words and wildcards. See Davidov and Rappoport (2008) for discussion. 17The number of patterns and their length are hyperparameters tuned on the development data (see Appendix A). 
301 Model ROC SST Amazon Hard 62.2 (4K) 75.5 (6K) 88.5 (67K) DAN 64.3 (91K) 83.1 (91K) 85.4 (91K) BiLSTM 65.2 (844K) 84.8 (1.5M) 90.8 (844K) CNN 64.3 (155K) 82.2 (62K) 90.2 (305K) SoPa 66.5 (255K) 85.6 (255K) 90.5 (256K) SoPams1 64.4 84.8 90.0 SoPams1\{sl} 63.2 84.6 89.8 SoPams1\{✏} 64.3 83.6 89.7 SoPams1\{sl, ✏} 64.0 85.0 89.5 Table 1: Test classification accuracy (and the number of parameters used). The bottom part shows our ablation results: SoPa: our full model. SoPams1: running with max-sum semiring (rather than max-product), with the identity function as our encoder E (see Equation 3). sl: self-loops, ✏: ✏transitions. The final row is equivalent to a one-layer CNN. 100 1,000 10,000 60 70 80 Num. Training Samples (SST) Classification Accuracy 100 1,000 10,000 70 75 80 85 90 Num. Training Samples (Amazon) SoPa (ours) DAN Hard BiLSTM CNN Figure 3: Test accuracy on SST and Amazon with varying number of training instances. multiple window sizes. Interestingly, the most notable difference between SoPa and CNN is the semiring and encoder function, while ✏transitions and self-loops have little effect on performance.18 7 Interpretability We turn to another key aspect of SoPa—its interpretability. We start by demonstrating how we interpret a single pattern, and then describe how to interpret the decisions made by downstream classifiers that rely on SoPa—in this case, a sentence classifier. Importantly, these visualization techniques are equally applicable to CNNs. Interpreting a single pattern. In order to visualize a pattern, we compute the pattern matching scores with each phrase in our training dataset, and select the k phrases with the highest scores. Table 2 shows examples of six patterns learned using the best SoPa model on the SST dataset, as 18Although SoPa does make use of them—see Section 7. Highest Scoring Phrases Patt. 1 thoughtful , reverent portrait of and astonishingly articulate cast of entertaining , thought-provoking film with gentle , mesmerizing portrait of poignant and uplifting story in Patt. 2 ’s ✏ uninspired story . this ✏ bad on purpose this ✏ leaden comedy . a ✏ half-assed film . is ✏ clumsy ,SL the writing Patt. 3 mesmerizing portrait of a engrossing portrait of a clear-eyed portrait of an fascinating portrait of a self-assured portrait of small Patt. 4 honest , and enjoyable soulful , scathingSL and joyous unpretentious , charmingSL , quirky forceful , and beautifully energetic , and surprisingly Patt. 5 is deadly dull a numbingly dull is remarkably dull is a phlegmatic an utterly incompetent Patt. 6 five minutes four minutes final minutes first half-hour fifteen minutes Table 2: Six patterns of different lengths learned by SoPa on SST. Each group represents a single pattern p, and shows the five phrases in the training data that have the highest score for p. Columns represent pattern states. Words marked with SL are self-loops. ✏symbols indicate ✏-transitions. All other words are from main path transitions. represented by their five highest scoring phrases in the training set. A few interesting trends can be observed from these examples. First, it seems our patterns encode semantically coherent expressions. A large portion of them correspond to sentiment (the five top examples in the table), but others capture different semantics, e.g., time expressions. Second, it seems our patterns are relatively soft, and allow lexical flexibility. 
While some patterns do seem to fix specific words, e.g., “of” in the first example or “minutes” in the last one, even in those cases some of the top matching spans replace these words with other, similar words (“with” and “halfhour”, respectively). Encouraging SoPa to have more concrete words, e.g., by jointly learning the word vectors, might make SoPa useful in other contexts, particularly as a decoder. We defer this direction to future work. Finally, SoPa makes limited but non-negligible use of self-loops and epsilon steps. Interestingly, the second example shows that one of the pat302 Analyzed Documents it ’s dumb , but more importantly , it ’s just not scary though moonlight mile is replete with acclaimed actors and actresses and tackles a subject that ’s potentially moving , the movie is too predictable and too self-conscious to reach a level of high drama While its careful pace and seemingly opaque story may not satisfy every moviegoer ’s appetite, the film ’s final scene is soaringly , transparently moving unlike the speedy wham-bam effect of most hollywood offerings , character development – and more importantly, character empathy – is at the heart of italian for beginners . the band ’s courage in the face of official repression is inspiring , especially for aging hippies ( this one included ) . Table 3: Documents from the SST training data. Phrases with the largest contribution toward a positive sentiment classification are in bold green, and the most negative phrases are in italic orange. terns had an ✏-transition at the same place in every phrase. This demonstrates a different function of ✏-transitions than originally designed—they allow a pattern to effectively shorten itself, by learning a high ✏-transition parameter for a certain state. Interpreting a document. SoPa provides an interpretable representation of a document—a vector of the maximal matching score of each pattern with any span in the document. To visualize the decisions of our model for a given document, we can observe the patterns and corresponding phrases that score highly within it. To understand which of the k patterns contributes most to the classification decision, we apply a leave-one-out method. We run the forward method of the MLP layer in SoPa k times, each time zeroing-out the score of a different pattern p. The difference between the resulting score and the original model score is considered p’s contribution. We then consider the highest contributing patterns, and attach each one with its highest scoring phrase in that document. Table 3 shows example texts along with their most positive and negative contributing phrases. 8 Related Work Weighted finite-state automata. WFSAs and hidden Markov models19 were once popular in automatic speech recognition (Hetherington, 2004; Moore et al., 2006; Hoffmeister et al., 2012) 19HMMs are a special case of WFSAs (Mohri et al., 2002). and remain popular in morphology (Dreyer, 2011; Cotterell et al., 2015). Most closely related to this work, neural networks have been combined with weighted finite-state transducers to do morphological reinflection (Rastogi et al., 2016). These prior works learn a single FSA or FST, whereas our model learns a collection of simple but complementary FSAs, together encoding a sequence. We are the first to incorporate neural networks both before WFSAs (in their transition scoring functions), and after (in the function that turns their vector of scores into a final prediction), to produce an expressive model that remains interpretable. 
Recurrent neural networks. The ability of RNNs to represent arbitrarily long sequences of embedded tokens has made them attractive to NLP researchers. The most notable variants, the long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU; Cho et al., 2014), have become ubiquitous in NLP algorithms (Goldberg, 2016). Recently, several works introduced simpler versions of RNNs, such as recurrent additive networks (Lee et al., 2017) and Quasi-RNNs (Bradbury et al., 2017). Like SoPa, these models can be seen as points along the bridge between RNNs and CNNs. Other works have studied the expressive power of RNNs, in particular in the context of WFSAs or HMMs (Cleeremans et al., 1989; Giles et al., 1992; Visser et al., 2001; Chen et al., 2018). In this work we relate CNNs to WFSAs, showing that a one-layer CNN with max-pooling can be simulated by a collection of linear-chain WFSAs. Convolutional neural networks. CNNs are prominent feature extractors in NLP, both for generating character-based embeddings (Kim et al., 2016), and as sentence encoders for tasks like text classification (Yin and Sch¨utze, 2015) and machine translation (Gehring et al., 2017). Similarly to SoPa, several recently introduced variants of CNNs support varying window sizes by either allowing several fixed window sizes (Yin and Sch¨utze, 2015) or by supporting non-consecutive n-gram matching (Lei et al., 2015; Nguyen and Grishman, 2016). Neural networks and patterns. Some works used patterns as part of a neural network. Schwartz et al. (2016) used pattern contexts for estimating word embeddings, showing improved word similarity results compared to bag-of-word 303 contexts. Shwartz et al. (2016) designed an LSTM representation for dependency patterns, using them to detect hypernymy relations. Here, we learn patterns as a neural version of WFSAs. Interpretability. There have been several efforts to interpret neural models. The weights of the attention mechanism (Bahdanau et al., 2015) are often used to display the words that are most significant for making a prediction. LIME (Ribeiro et al., 2016) is another approach for visualizing neural models (not necessarily textual). Yogatama and Smith (2014) introduced structured sparsity, which encodes linguistic information into the regularization of a model, thus allowing to visualize the contribution of different bag-of-word features. Other works jointly learned to encode text and extract the span which best explains the model’s prediction (Yessenalina et al., 2010; Lei et al., 2016). Li et al. (2016) and K´ad´ar et al. (2017) suggested a method that erases pieces of the text in order to analyze their effect on a neural model’s decisions. Finally, several works presented methods to visualize deep CNNs (Zeiler and Fergus, 2014; Simonyan et al., 2014; Yosinski et al., 2015), focusing on visualizing the different layers of the network, mainly in the context of image and video understanding. We believe these two types of research approaches are complementary: inventing general purpose visualization tools for existing black-box models on the one hand, and on the other, designing models like SoPa that are interpretable by construction. 9 Conclusion We introduced SoPa, a novel model that combines neural representation learning with WFSAs. We showed that SoPa is an extension of a one-layer CNN. It naturally models flexible-length spans with insertion and deletion, and it can be easily customized by swapping in different semirings. 
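As a brief illustration of what "swapping in different semirings" amounts to in practice, the sketch below abstracts the two operations used by the matching recurrence; the Semiring container and the generic product are illustrative rather than the released API.

from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Semiring:
    plus: Callable   # generalized "sum" over alternative paths (a NumPy ufunc)
    times: Callable  # generalized "product" along a path (a NumPy ufunc)

max_product = Semiring(plus=np.maximum, times=np.multiply)  # used with sigmoid scores
max_sum     = Semiring(plus=np.maximum, times=np.add)       # recovers the CNN-like variant

def generic_matmul(A, B, sr):
    # [result]_{i,j} = plus_k ( A_{i,k} times B_{k,j} )
    return sr.plus.reduce(sr.times(A[:, :, None], B[None, :, :]), axis=1)

Passing a different Semiring instance changes the matching behavior without touching the rest of the model.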
SoPa performs on par with or strictly better than four baselines on three text classification tasks, while requiring fewer parameters than the stronger baselines. On smaller training sets, SoPa outperforms all four baselines. As a simple version of an RNN, which is more expressive than one-layer CNNs, we hope that SoPa will encourage future research on the bridge between these two mechanisms. To facilitate such research, we release our implementation at https://github.com/ Noahs-ARK/soft_patterns. Acknowledgments We thank Dallas Card, Elizabeth Clark, Peter Clark, Bhavana Dalvi, Jesse Dodge, Nicholas FitzGerald, Matt Gardner, Yoav Goldberg, Mark Hopkins, Vidur Joshi, Tushar Khot, Kelvin Luu, Mark Neumann, Hao Peng, Matthew E. Peters, Sasha Rush, Ashish Sabharwal, Minjoon Seo, Sofia Serrano, Swabha Swayamdipta, Chenhao Tan, Niket Tandon, Trang Tran, Mark Yatskar, Scott Yih, Vicki Zayats, Rowan Zellers, Luke Zettlemoyer, and several anonymous reviewers for their helpful advice and feedback. This work was supported in part by NSF grant IIS-1562364, by the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by NSF grant ACI-1548562, and by the NVIDIA Corporation through the donation of a Tesla GPU. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR. Leonard E. Baum and Ted Petrie. 1966. Statistical Inference for Probabilistic Functions of Finite State Markov Chains. The Annals of Mathematical Statistics, 37(6):1554–1563. James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2017. Quasi-Recurrent Neural Network. In Proc. of ICLR. Yining Chen, Sorcha Gilroy, Kevin Knight, and Jonathan May. 2018. Recurrent neural networks as weighted language recognizers. In Proc. of NAACL. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proc. of EMNLP. Axel Cleeremans, David Servan-Schreiber, and James L McClelland. 1989. Finite state automata and simple recurrent networks. Neural computation, 1(3):372–381. Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2015. Modeling word forms using latent underlying morphs and phonology. TACL, 3:433–447. Dmitry Davidov and Ari Rappoport. 2008. Unsupervised discovery of generic relationships using pattern clusters and its evaluation by automatically generated SAT analogy questions. In Proc. of ACL. Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Enhanced sentiment learning using twitter hashtags and smileys. In Proc. of COLING. 304 Markus Dreyer. 2011. A Non-parametric Model for the Discovery of Inflectional Paradigms from Plain Text Using Graphical Models over Strings. Ph.D. thesis, Johns Hopkins University, Baltimore, MD, USA. Manfred Droste and Paul Gastin. 1999. The Kleene– Sch¨utzenberger theorem for formal power series in partially commuting variables. Information and Computation, 153(1):47–80. Jason Eisner. 2002. Parameter estimation for probabilistic finite-state transducers. In Proc. of ACL. Jason Eisner. 2016. Inside-outside and forwardbackward algorithms are just backprop (tutorial paper). Jeffrey L. Elman. 1990. Finding structure in time. Cognitive science, 14(2):179–211. Oren Etzioni, Michael Cafarella, Doug Downey, AnaMaria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2005. 
Unsupervised named-entity extraction from the web: An experimental study. Artificial intelligence, 165(1):91–134. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann Dauphin. 2017. Convolutional sequence to sequence learning. In Proc. of ICML. C. Lee Giles, Clifford B Miller, Dong Chen, HsingHen Chen, Guo-Zheng Sun, and Yee-Chun Lee. 1992. Learning and extracting finite state automata with second-order recurrent neural networks. Neural Computation, 4(3):393–405. Yoav Goldberg. 2016. A primer on neural network models for natural language processing. JAIR, 57:345–420. Joshua Goodman. 1999. Semiring parsing. Computational Linguistics, 25(4):573–605. Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proc. of COLING. Lee Hetherington. 2004. The MIT finite-state transducer toolkit for speech and language processing. In Proc. of INTERSPEECH. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Bj¨orn Hoffmeister, Georg Heigold, David Rybach, Ralf Schl¨uter, and Hermann Ney. 2012. WFST enabled solutions to ASR problems: Beyond HMM decoding. IEEE Transactions on Audio, Speech, and Language Processing, 20:551–564. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proc. of ACL. ´Akos K´ad´ar, Grzegorz Chrupala, and Afra Alishahi. 2017. Representation of linguistic form and function in recurrent neural networks. Computational Linguistics, 43:761–780. Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. In Proc. of EMNLP. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. Character-aware neural language models. In Proc. of AAAI. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. of ICLR. Yann LeCun. 1998. Gradient-based Learning Applied to Document Recognition. In Proc. of the IEEE. Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2017. Recurrent additive networks. arXiv:1705.07393. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2015. Molding CNNs for text: non-linear, non-consecutive convolutions. In Proc. of EMNLP. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing Neural Predictions. In Proc. of EMNLP. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding Neural Networks through Representation Erasure. arXiv:1612.08220. Dekang Lin, Shaojun Zhao, Lijuan Qin, and Ming Zhou. 2003. Identifying synonyms among distributionally similar words. In Proc. of IJCAI. Julian McAuley and Jure Leskovec. 2013. Hidden factors and hidden topics: understanding rating dimensions with review text. In Proc. of RecSys. Mehryar Mohri. 1997. Finite-state transducers in language and speech processing. Computational Linguistics, 23:269–311. Mehryar Mohri, Fernando Pereira, and Michael Riley. 2002. Weighted finite-state transducers in speech recognition. Computer Speech & Language, 16(1):69–88. Darren Moore, John Dines, Mathew Magimai-Doss, Jithendra Vepa, Octavian Cheng, and Thomas Hain. 2006. Juicer: A weighted finite-state transducer speech decoder. In MLMI. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proc. of NAACL. Thien Huu Nguyen and Ralph Grishman. 2016. Modeling Skip-Grams for Event Detection with Convolutional Neural Networks. In Proc. of EMNLP. 
305 Fabian Pedregosa, Ga¨el Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and ´Edouard Duchesnay. 2011. Scikit-learn: Machine learning in Python. JMLR, 12:2825–2830. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proc. of EMNLP. Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neural context. In Proc. of NAACL. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “Why should I trust you?”: Explaining the predictions of any classifier. In Proc. of KDD. Dana Rubinstein, EffiLevi, Roy Schwartz, and Ari Rappoport. 2015. How well do distributional models capture different types of semantic knowledge? In Proc. of ACL. Jacques Sakarovitch. 2009. Rational and recognisable power series. In Manfred Droste, Werner Kuich, and Heiko Vogler, editors, Handbook of Weighted Automata, pages 105–174. Springer Berlin Heidelberg, Berlin, Heidelberg. M. P. Sch¨utzenberger. 1961. On the definition of a family of automata. Information and Control, 4(2):245– 270. Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for improved word similarity prediction. In Proc. of CoNLL. Roy Schwartz, Roi Reichart, and Ari Rappoport. 2016. Symmetric patterns and coordinations: Fast and enhanced representations of verbs and adjectives. In Proc. of NAACL. Roy Schwartz, Maarten Sap, Ioannis Konstas, Li Zilles, Yejin Choi, and Noah A. Smith. 2017. The effect of different writing tasks on linguistic style: A case study of the roc story cloze task. In Proc.of CoNLL. Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving hypernymy detection with an integrated path-based and distributional method. In Proc. of ACL. Hava T. Siegelmann and Eduardo D. Sontag. 1995. On the computational power of neural nets. Journal of computer and system sciences, 50(1):132–150. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks: Visualising image classification models and saliency maps. In Proc. of ICLR Workshop. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. of EMNLP. Oren Tsur, Dmitry Davidov, and Ari Rappoport. 2010. ICWSM–a great catchy name: Semi-supervised recognition of sarcastic sentences in online product reviews. In Proc. of ICWSM. Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment. In Proc. of EMNLP. Ingmar Visser, Maartje EJ Raijmakers, and Peter CM Molenaar. 2001. Hidden markov model interpretations of neural networks. In Connectionist Models of Learning, Development and Evolution, pages 197– 206. Springer. Andrew Viterbi. 1967. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory, 13(2):260–269. Ainur Yessenalina, Yisong Yue, and Claire Cardie. 2010. Multi-level structured models for documentlevel sentiment classification. In Proc. of EMNLP. Wenpeng Yin and Hinrich Sch¨utze. 2015. Multichannel Variable-Size Convolution for Sentence Classification. In Proc. of CoNLL. Dani Yogatama, Lingpeng Kong, and Noah A. 
Smith. 2015. Bayesian optimization of text representations. In Proc. of EMNLP. Dani Yogatama and Noah A. Smith. 2014. Linguistic structured sparsity in text categorization. In Proc. of ACL. Jason Yosinski, Jeff Clune, Anh Mai Nguyen, Thomas J. Fuchs, and Hod Lipson. 2015. Understanding neural networks through deep visualization. In Proc. of the ICML Deep Learning Workshop. Matthew D. Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Proc. of ECCV. Peng Zhou, Zhenyu Qi, Suncong Zheng, Jiaming Xu, Hongyun Bao, and Bo Xu. 2016. Text classification improved by integrating bidirectional LSTM with two-dimensional max pooling. In Proc. of COLING.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 306–316 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 306 Zero-shot Learning of Classifiers from Natural Language Quantification Shashank Srivastava Igor Labutov Tom Mitchell School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213, USA [email protected] [email protected] [email protected] Abstract Humans can efficiently learn new concepts using language. We present a framework through which a set of explanations of a concept can be used to learn a classifier without access to any labeled examples. We use semantic parsing to map explanations to probabilistic assertions grounded in latent class labels and observed attributes of unlabeled data, and leverage the differential semantics of linguistic quantifiers (e.g., ‘usually’ vs ‘always’) to drive model training. Experiments on three domains show that the learned classifiers outperform previous approaches for learning with limited data, and are comparable with fully supervised classifiers trained from a small number of labeled examples. 1 Introduction As computer systems that interact with us in natural language become pervasive (e.g., Siri, Alexa, Google Home), they suggest the possibility of letting users teach machines in language. The ability to learn from language can enable a paradigm of ubiquitous machine learning, allowing users to teach personalized concepts (e.g., identifying ‘important emails’ or ‘project-related emails’) when limited or no training data is available. In this paper, we take a step towards solving this problem by exploring the use of quantifiers to train classifiers from declarative language. For illustration, consider the hypothetical example of a user explaining the concept of an “important email” through natural language statements (Figure 1). Our framework takes a set of such natural language explanations describing a concept (e.g., “emails that I reply to are usually important”) and a set of unlabeled instances as input, and produces Figure 1: Supervision from language can enable concept learning from limited or even no labeled examples. Our approach assumes the learner has sensors that can extract attributes from data, such as those listed in the table, and language that can refer to these sensors and their values. a binary classifier (for important emails) as output. Our hypothesis is that language describing concepts encodes key properties that can aid statistical learning. These include specification of relevant attributes (e.g., whether an email was replied to), relationships between such attributes and concept labels (e.g., if a reply implies the class label of that email is ‘important’), as well as the strength of these relationships (e.g., via quantifiers like ‘often’, ‘sometimes’, ‘rarely’). We infer these properties automatically, and use the semantics of linguistic quantifiers to drive the training of classifiers without labeled examples for any concept. This is a novel scenario, where previous approaches in semi-supervised and constraint-based learning are not directly applicable. Those approaches require manual pre-specification of expert knowledge for model training. In our approach, this knowledge is automatically inferred from noisy natural language explanations from a user. Our approach is summarized in the schematic in Figure 2. 
First, we map the set of natural language explanations of a concept to logical forms 307 Figure 2: Our approach to Zero-shot learning from Language. Natural language explanations on how to classify concept examples are parsed into formal constraints relating features to concept labels. The constraints are combined with unlabeled data, using posterior regularization to yield a classifier. that identify the attributes mentioned in the explanation, and describe the information conveyed about the attribute and the concept label as a quantitative constraint. This mapping is done through semantic parsing. The logical forms denote quantitative constraints, which are probabilistic assertions about observable attributes of the data and unobserved concept labels. Here the strength of a constraint is assumed to be specified by a linguistic quantifier (such as ‘all’, ‘some’, ‘few’, etc., which reflect degrees of generality of propositions). Next, we train a classification model that can assimilate these constraints by adapting the posterior regularization framework (Ganchev et al., 2010). Intuitively, this can be seen as defining an optimization problem, where the objective is to find parameter estimates for the classifier that do not simply fit the data, but also agree with the human provided natural language advice to the greatest extent possible. Since logical forms can be grounded in a variety of sensors and external resources, an explicit model of semantic interpretation conceptually allows the framework to subsume a flexible range of grounding behaviors. The main contributions of this work are: 1. We introduce the problem of zero-shot learning of classifiers from language, and present an approach towards this. 2. We develop datasets for zero-shot classification from natural descriptions, exhibiting tasks with various levels of difficulty. 3. We empirically show that coarse probability estimates to model linguistic quantifiers can effectively supervise model training across three domains of classification tasks. 2 Related Work Many notable approaches have explored incorporation of background knowledge into the training of learning algorithms. However, none of them addresses the issue of learning from natural language. Prominent among these are the Constraint-driven learning (Chang et al., 2007a), Generalized Expectation (Mann and McCallum, 2010) and Posterior Regularization (Ganchev et al., 2010) and Bayesian Measurements (Liang et al., 2009) frameworks. All of these require domain knowledge to be manually programmed in before learning. Similarly, Probabilistic Soft Logic (Kimmig et al., 2012) allows users to specify rules in a logical language that can be used for reasoning over graphical models. More recently, multiple approaches have explored fewshot learning from perspective of term or attributebased transfer (Lampert et al., 2014), or learning representations of instances as probabilistic programs (Lake et al., 2015). Other work (Lei Ba et al., 2015; Elhoseiny et al., 2013) considers language terms such as colors and textures that can be directly grounded in visual meaning in images. Some previous work (Srivastava et al., 2017) has explored using language explanations for feature space construction in concept learning tasks, where the problem of learning to interpret language, and learning classifiers is treated jointly. However, this approach assumes availability of labeled data for learning classifiers. Also notable is recent work by Andreas et al. 
(2017), who propose using language descriptions as parameters to model structure in learning tasks in multiple settings. More generally, learning from language has also been previously explored in tasks such as playing games (Branavan et al., 2012), robot navigation (Karamcheti et al., 2017), etc. Natural language quantification has been studied from multiple perspectives in formal logic (Barwise and Cooper, 1981), linguistics (Löbner, 1987; Bach et al., 2013) and cognitive psychology (Kurtzman and MacDonald, 1993). While quantification has traditionally been defined in set-theoretic terms in linguistic theories1, our approach joins alternative 1e.g., ‘some A are B’ ⇔A ∩B ̸= ∅ 308 perspectives that represent quantifiers probabilistically (Moxey and Sanford, 1993; Yildirim et al., 2013). To the best of our knowledge, this is the first work to leverage the semantics of quantifiers to guide statistical learning models. 3 Learning Classifiers from Language Our approach relies on first mapping natural language descriptions to quantitative constraints that specify statistical relationships between observable attributes of instances and their latent concept labels (Step 1 in Figure 2). These quantitative constraints are then imbued into the training of a classifier by guiding predictions from the learned models to concur with them (Step 2). We use semantic parsing to interpret sentences as quantitative constraints, and adapt the posterior regularization principle for our setting to estimate the classifier parameters. Next, we describe these steps in detail. Since learning in this work is largely driven by the semantics of linguistic quantifiers, we call our approach Learning from Natural Quantification, or LNQ. 3.1 Mapping language to constraints A key challenge in learning from language is converting free-form language to representations that can be reasoned over, and grounded in data. For example, a description such as ‘emails that I reply to are usually important’ may be converted to a mathematical assertion such as P(important | replied : true) = 0.7’, which statistical methods can reason with. Here, we argue that this process can be automated for a large number of real-world descriptions. In interpreting statements describing concepts, we infer the following key elements: 1. Feature x, which is grounded in observed attributes of the data. For our example, ‘emails replied to’ can refer to a predicate such as replied:true, which can be evaluated in context of emails to indicate the whether an email was replied to. Incorporating compositional representations enables more complex reasoning. e.g., ‘the subject of course-related emails usually mentions CS100’ can map to a composite predicate like ‘isStringMatch(field:subject, stringVal(‘CS100’))’ , which can be evaluated for different emails to reflect whether their subject mentions ‘CS100’. Mapping language to executable feature functions has been shown to be effective (Srivastava et al., 2017). For sake of simplicity, here we assume that a statement refers to a single feature, but the method can be extended to handle more complex descriptions. 2. Concept label y, specifying the class of instances a statement refers to. For binary classes, this reduces to examples or non-examples of a concept. For our running example, y corresponds to the positive class of important emails. 3. Constraint-type asserted by the statement. 
We argue that most concept descriptions belong to one of three categories shown in Table 2, and these constitute our vocabulary of constraint types for this work. For our running example (‘emails that I reply to are usually important’), the type corresponds to P(y | x), since the syntax of the statement indicates an assertion conditioned on the feature indicating whether an email was replied to. On the other hand, an assertion such as ‘I usually reply to important emails’ indicates an assertion conditioned on the set important emails, and therefore corresponds to the type P(x | y). 4. Strength of the constraint. We assume this to be specified by a quantifier. For our running example, this corresponds to the adverb ‘usually’. In this work, by quantifier we specifically refer to frequency adverbs (‘usually’,‘rarely’, etc.) and frequency determiners (‘few’, ‘all’, etc.).2 Our thesis is that the semantics of quantifiers can be leveraged to make statistical assertions about relationships involving attributes and concept labels. One way to do this might be to simply associate point estimates of probability values, suggesting the fraction of truth values for assertions described with these quantifiers. Table 1 shows probability values we assign to some common frequency quantifiers for English. These values were set simply based on the authors’ intuition about their semantics, and do not reflect any empirical distributions. See Figure 8 for empirical distributions corresponding to some linguistic quantifiers in our data. While these probability values maybe inaccurate, and the semantics of these quantifiers may also change based on context and the speaker, they can still serve as a strong signal for learning classifiers since they are not used as hard constraints, but serve to bias classifiers towards better generalization. We use a semantic parsing model to map statements to formal semantic representations that specify these aspects. For example, the statement ‘Emails that I reply to are usually important’ is 2This is a significantly restricted definition, and does not address non-frequency determiners (e.g.,‘the’, ‘only’, etc. ) or mass quantifiers (e.g. ‘a lot’, ‘little’), among other categories. 309 Frequency quantifier Probability all, always, certainly, definitely 0.95 usually, normally, generally, likely, typically 0.70 most, majority 0.60 often, half 0.50 many 0.40 sometimes, frequently, some 0.30 few, occasionally 0.20 rarely, seldom 0.10 never 0.05 Table 1: Probability values we assign to common linguistic quantifiers (hyper-parameters for method) mapped to a logical form like (x→replied:true y→positive type:y|x quant:usually). 3.1.1 Semantic Parser components Given a descriptive statement s, the parsing problem consists of predicting a logical form l that best represents its meaning. In turn, we formulate the probability of the logical form l as decomposing into three component factors: (i) probability of observing a feature and concept labels lxy based on the text of the sentence, (ii) probability of the type of the assertion ltype based on the identified feature, concept label and syntactic properties of the sentence s, and (iii) identifying the linguistic quantifier, lquant, in the sentence. 
P(l | s) = P(lxy | s) P(ltype | lxy, s) P(lquant | s) We model each of the three components as follows: by using a traditional semantic parser for the first component, training a Max-Ent classifier for the constraint-type for the second component, and looking for an explicit string match to identify the quantifier for the third component. Identifying features and concept labels, lxy: For identifying the feature and concept label mentioned in a sentence, we presume a linear score S(s, lxy) = wT ψ(s, lxy) indicating the goodness of assigning a partial logical form, lxy, to a sentence s. Here, ψ(s, lxy) ∈Rn are features that can depend on both the sentence and the partial logical form, and w ∈Rn is a parameter weight-vector for this component. Following recent work in semantic parsing (Liang et al., 2011), we assume a loglinear distribution over interpretations of a sentence. P(lxy | s) ∝wT ψ(s, lxy) Provided data consisting of statements labeled with logical forms, the model can be trained via maximum likelihood estimation, and be used to predict interpretations for new statements. For training this component, we use a CCG semantic parsing formalism, and follow the feature-set from Zettlemoyer and Collins (2007), consisting of simple indicator features for occurrences of keywords and lexicon entries. This is also compatible with the semantic parsing formalism in Srivastava et al. (2017), whose data (and accompanying lexicon) are also used in our evaluation. For other datasets with predefined features, this component is learned easily from simple lexicons consisting of trigger words for features and labels.3 This component is the only part of the parser that is domain-specific. We note that while this component assumes a domain-specific lexicon (and possibly statement annotated with logical forms), this effort is one-time-only, and will find re-use across the possibly large number of concepts in the domain (e.g., email categories). Identifying assertion type, ltype: The principal novelty in our semantic parsing model is in identifying the type of constraint asserted by a statement. For this, we train a MaxEnt classifier, which uses positional and syntactic features based on the text-spans corresponding to feature and concept mentions to predict the constraint type. We extract the following features from a statement: 1. Boolean value indicating whether the text-span corresponding to the feature x precedes the text span for the concept label y. 2. Boolean value indicating if sentence is in passive (rather than active) voice, as identified by the occurrence of nsubjpass dependency relation. 3. Boolean value indicating whether head of the text-span for x is a noun, or a verb. 4. Features indicating the occurrence of conditional tokens (‘if’, ‘then’ and ‘that’) preceding or following text-spans for x and y. 5. Features indicating presence of a linguistic quantifier in a det or an advmod relation with syntactic head of x or y. Since the constraint type is determined by syntactic and dependency parse features, this 3We also identify whether a feature x is negated, through the existence of a neg dependency relation with the head of its text-span. 
e.g., Important emails are usually not deleted 310 Type Example description Conversion to Expectation Constraint P(y | x) Emails that I reply to are usually important E[Iy=important,reply(x):true] −pusually × E[Ireply(x):true] = 0 P(x | y) I often reply to important emails E[Iy=important,reply(x):true] −poften × E[Iy=important] = 0 P(y) I rarely get important emails Same as P(y|x0), where x0 is a constant feature Table 2: Common constraint-types, and their representation as expectations over feature values component does not need to be retrained for new domains. In this work, we trained this classifier based on a manually annotated set of 80 sentences describing classes in the small UCI Zoo dataset (Lichman, 2013), and used this model for all experiments. Identifying quantifiers, lquant: Multiple linguistic quantifiers in a sentence are rare, and we simply look for the first occurrence of a linguistic quantifier in a sentence, i.e. P(lquant|s) is a deterministic function. We note that many real world descriptions of concepts lack an explicit quantifier. e.g., ‘Emails from my boss are important’. In this work, we ignore such statements for the purpose of training. Another treatment might be to models these statements as reflecting a default quantifier, but we do not explore this direction here. Finally, the decoupling of quantification from logical representation is a key decision. At the cost of linguistic coarseness, this allows modeling quantification irrespective of the logical representation (lambda calculus, predicate-argument structures, etc.). 3.2 Classifier training from constraints In the previous section, we described how individual explanations can be mapped to probabilistic assertions about observable attributes (e.g., the statement ‘Emails that I reply to are usually important’ may map to P(y = important | replied = true) = pusually). Here, we describe how a set of such assertions can be used in conjunction with unlabeled data to train classification models. Our approach relies on having predictions from the classifier on a set of unlabeled examples (X = {x1 . . . xn}) agree with human-provided advice (in form of constraints). The unobserved concept labels (Y = {y1 . . . yn}) for the unlabeled data constitute latent variables for our method. The training procedure can be seen as iteratively inferring the latent concept labels for unlabeled examples so as to agree with the human advice, and updating the classification models by taking these labels as given. While there are multiple approaches for training statistical models with constraints on latent variables, here we use the Posterior Regularization (PR) framework. The PR objective can be used to optimize a latent variable model subject to a set of constraints, which specify preferences for values of the posterior distributions pθ(Y | X). JQ(θ) = L(θ) −minq∈Q KL(q | pθ(Y |X)) Here, the set Q represents a set of preferred posterior distributions over latent variables Y , and is defined as Q := {qX(Y ) : Eq[φ(X, Y )] ≤b}. The overall objective consists of two components, representing how well does a model θ explain the data (likelihood term L(θ)), and how far it is from the set Q (KL-divergence term). In our case, each parsed statement defines a probabilistic constraint. The conjunction of all such constraints defines Q (representing models that exactly agree with human-provided advice). Thus, optimizing the objective reflects a tension between choosing models that increase data likelihood, and emulating language advice. 
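To make the conversion in Table 2 concrete, the following is a minimal sketch (not the authors' implementation) of how a single parsed statement could be turned into an expectation constraint over unlabeled data. The record layout, function name, and the dense posterior matrix are illustrative assumptions; the quantifier probabilities are the values from Table 1.

```python
import numpy as np

# Quantifier -> probability values from Table 1 (hyper-parameters of the method).
QUANT_PROB = {
    "all": 0.95, "always": 0.95, "certainly": 0.95, "definitely": 0.95,
    "usually": 0.70, "normally": 0.70, "generally": 0.70, "likely": 0.70, "typically": 0.70,
    "most": 0.60, "majority": 0.60, "often": 0.50, "half": 0.50, "many": 0.40,
    "sometimes": 0.30, "frequently": 0.30, "some": 0.30,
    "few": 0.20, "occasionally": 0.20, "rarely": 0.10, "seldom": 0.10, "never": 0.05,
}

def constraint_violation(parse, x_feat, y_prob):
    """Value of E_q[phi(X, Y)] - b for one parsed statement (see Table 2).

    parse  : dict with 'label' (index of concept label y), 'type' ('y|x',
             'x|y' or 'y') and 'quant' (quantifier string) -- assumed layout.
    x_feat : (n,) binary vector giving the value of feature x on unlabeled data.
    y_prob : (n, n_labels) current posterior q(y | x) over concept labels.
    """
    p = QUANT_PROB[parse["quant"]]
    y = parse["label"]
    joint = np.sum(y_prob[:, y] * x_feat)            # E_q[ I_{y, x:true} ]
    if parse["type"] == "y|x":                       # P(y | x) = p
        return joint - p * np.sum(x_feat)
    elif parse["type"] == "x|y":                     # P(x | y) = p
        return joint - p * np.sum(y_prob[:, y])
    else:                                            # P(y) = p, with x a constant feature
        return np.sum(y_prob[:, y]) - p * len(x_feat)
```

The relaxed objective discussed below then penalizes the squared magnitude of exactly these violations.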
Converting to PR constraints: The set of constraints that PR can handle can be characterized as bounds on expected values of functions (φ) of X and Y (or equivalently, from linearity of expectation, as linear inequalities over expected values of functions of X and Y ). To use the framework, we need to ensure that each constraint type in our vocabulary can be expressed in such a form. Following the plan in Table 2, each constraint type can be converted in an equivalent form Eq[φ(X, Y )] = b, compatible with PR. In particular, each of these constraint types in our vocabulary can be expressed as equations about expectation values of joint indicator functions of label assignments to instances and their attributes. To explain, consider the assertion P(y = important | replied : true) = pusually. The probability on the LHS can be expressed as the empirical fraction P i E[Iyi=important,replied:true] P i E[Ireplied:true] , which leads to the linear constraints seen in Table 2 (expected values in the table hide summations over instances for brevity). Here, I denote indicator functions. Thus, we can incorporate probability constraints into our 311 adaptation of the PR scheme. Learning and Inference: We choose a loglinear parameterization for the concept classifier. pθ(yi | xi) ∝exp(yθT x) The training of the classifier follows the modified EM procedure described in Ganchev et al. (2010). As proposed in the original work, we solve a relaxed version of the optimization that allows slack variables, and modifies the PR objective with a L2 regularizer. This allows solutions even when the problem is over-constrained, and the set Q is empty (e.g. due to contradictory advice). J′(θ, q) = L(θ) −KL(q|pθ(Y |X)) −λ ||Eq[φ(X, Y )] −b||2 The key step in the training is the computation of the posterior regularizer in the E-step. argmin q KL(q | pθ) + λ ||Eq[φ(X, Y )] −b||2 This objective is strictly convex, and all constraints are linear in q. We follow the optimization procedure from Bellare et al. (2009), whereby the minimization problem in the E-step can be efficiently solved through gradient steps in the dual space. In the M-step, we update the model parameters for the classifier based on label distributions q estimated in the E-step. This simply reduces to estimating the parameters θ for the logistic regression classifier, when class label probabilities are known. In all experiments, we run EM for 20 iterations and use a regularization coefficient of λ = 0.1. 4 Datasets For evaluating our approach, we created datasets of classification tasks paired with descriptions of the classes, as well as used some existing resources. In this section, we summarize these steps. Shapes data: To experiment with our approach in a wider range of controlled settings, part of our evaluation focuses on synthetic concepts. For this, we created a set of 50 shape classification tasks that exhibit a range of difficulty, and elicited language descriptions spanning a variety of quantifier expressions. The tasks require classifying geometric shapes with a set of predefined attributes (fill color, border, color, shape, size) into two concept-labels (abstractly named ‘selected shape’, and ‘other’). The datasets were created through a generative process, where features xi are conditionally independent given the concept-label. 
Each feature’s conditional distribution is sampled from a symmetric (a) Statement generation task (b) Concept Quiz Figure 3: Shapes data: Mechanical Turk tasks for (a) collecting concept descriptions, and (b) human evaluation from concept descriptions Dirichlet distribution, and varying the concentration parameter α allows tuning the noise level of the generated datasets (quantified via their Bayes Optimal accuracy4). A dataset is then generated by sampling from these conditional distributions. We sample a total of 50 such datasets, consisting of 100 training and 100 test examples each, where each example is a shape and its assigned label. For each dataset, we then collected statements from Mechanical Turk workers that describe the concept. The task required turkers to study a sample of shapes presented on the screen for each of the two concept-labels (see Figure 3(a)). They were then asked to write a set of statements that would help others classify these shapes without seeing the data. In total, 30 workers participated in this task, generating a mean of 4.3 statements per dataset. Email data: Srivastava et al. (2017) provide a dataset of language explanations from human users describing 7 categories of emails, as well as 1030 examples of emails belonging to those categories. While this work uses labeled examples, and focuses 4This is the accuracy of a theoretically optimal classifier, which knows the true distribution of the data and labels 312 Shapes: If a shape doesn’t have a blue border, it is probably not a selected shape. Selected shapes occasionally have a yellow fill. Emails: Emails that mention the word ’meet’ in the subject are usually meeting requests Personal reminders almost always have the same recipient and sender Birds: A specimen that has a striped crown is likely to be a selected bird. Birds in the other category rarely ever have dagger-shaped beaks Table 3: Examples of explanations for each domain Figure 4: Statement generation task for Birds data on mapping natural language explanations (∼30 explanations per email category) to compositional feature functions, we can also use statements in their data for evaluating our approach. While language quantifiers were not studied in the original work, we found about a third of the statements in this data to mention a quantifier. Birds data: The CUB-200 dataset (Wah et al., 2011) contains images of birds annotated with observable attributes such as size, primary color, wing-patterns, etc. We selected a subset of the data consisting of 10 species of birds and 53 attributes (60 examples per species). Turkers were shown examples of birds from a species, and negative examples consisting of a mix of birds from other Approach Avg Accuracy Labels Descriptions LNQ 0.751 no yes Bayes Optimal 0.831 – – FLGE+ 0.659 no yes FLGE 0.598 no yes LR 0.737 yes no Random 0.524 – – Ablation: LNQ (coarse quant) 0.679 no yes LNQ (no quant) 0.545 no yes Human: Human teacher 0.802 yes writes Human learner 0.734 no yes Table 4: Classification performance on Shapes datasets (averaged over 50 classification tasks). species, and were asked to describe the classes (similar to the Shapes data, see Figure 4). During the task, users also had access to a table enumerating groundable attributes they could refer to. In all, 60 workers participated, generating 6.1 statements on average. 
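The Shapes generative process described above can be sketched as follows. Only the Dirichlet-based sampling scheme is taken from the text; the attribute value sets, the default dataset sizes, and the helper names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed attribute vocabulary; the text lists fill color, border color, shape
# and size as attributes but does not enumerate their value sets.
ATTRIBUTES = {
    "fill_color":   ["red", "blue", "yellow", "green"],
    "border_color": ["red", "blue", "yellow", "green"],
    "shape":        ["circle", "square", "triangle"],
    "size":         ["small", "medium", "large"],
}

def sample_shapes_task(alpha, n_train=100, n_test=100):
    """Sample one synthetic Shapes-style classification task.

    Features are conditionally independent given the concept label; each
    feature's class-conditional distribution is drawn from a symmetric
    Dirichlet(alpha), so alpha tunes how noisy (hard) the concept is.
    """
    cond = {
        y: {a: rng.dirichlet(alpha * np.ones(len(v))) for a, v in ATTRIBUTES.items()}
        for y in ("selected", "other")
    }

    def sample_example():
        y = "selected" if rng.random() < 0.5 else "other"
        x = {a: rng.choice(v, p=cond[y][a]) for a, v in ATTRIBUTES.items()}
        return x, y

    train = [sample_example() for _ in range(n_train)]
    test = [sample_example() for _ in range(n_test)]
    return train, test
```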
5 Experiments Incorporating constraints from language has not been addressed before, and hence previous approaches for learning from limited data such as Mann and McCallum (2010); Chang et al. (2007b) would not directly work for this setting. Our baselines hence consist of extended versions of previous approaches that incorporate output from the parser, as well as fully supervised classifiers trained from a small number of labeled examples. Classification performance: The top section in Table 4 summarizes performance of various classifiers on the Shape datasets, averaged over all 50 classification tasks. FLGE+ refers to a baseline Figure 5: LNQ vs Bayes Optimal Classifier performance for Shape datasets. Each dot represents a dataset generated from a known distribution. 313 that uses the Feature Labeling through Generalized Expectation criterion, following the approach in Druck et al. (2008); Mann and McCallum (2010). The approach is based on labeling features are indicating specific class-labels, which corresponds to specifiying constraints of type P(y|x)5. While the original approach (Druck et al., 2008) sets this value to 0.9, we provide the method the quantitative probabilities used by LNQ. Since the original method cannot handle language descriptions, we also provide the approach the concept label y and feature x as identified by the parser. FLGE represents the version that is not provided quantifier probabilities. LR refers to a supervised logistic regression model trained on n = 8 randomly chosen labeled instances.6 We note that LNQ performs substantially better than both FLGE+ and LR on average. This validates our modeling principle for learning classifiers from explanations alone, and also suggests value in our PR-based formulation, which can handle multiple constraint types. We further note that not using quantifier probabilities significantly deteriorates FLGE’s performance. Figure 5 provides a more detailed characterization of LNQ’s performance. Each blue dot represents performance on a shape classification task. The horizontal axis represents the accuracy of the Bayes Optimal classifier, and the vertical represents accuracy of the LNQ approach. The blue line represents the trajectory for x = y, representing a perfect statistical classifier in the asymptotic case of infinite samples. We note that LNQ is effective in learning competent classifiers for all levels of hardness. Secondly, except for a small number of outliers, the approach works especially well for learning easy concepts (towards the right). From an error-analysis, we found that a majority of these errors are due to problems in parsing (e.g., missed negation, incorrect constraint type) or due to poor explanations from the teacher (bad grammar, or simply incorrect information). Figure 6 shows results for email classification tasks. In the figure, LN* refers to the approach in Srivastava et al. (2017), which uses natural language descriptions to define compositional features for email classification, but does not incorporate 5In general, Generalized Expectation can also handle broader constraint types, similar to Posterior Regularization 6LNQ models are indistinct from LR w.r.t. parametrization, but trained to maximize a different objective. The choice of n here is arbitrary, but is roughly twice the number of explanations for each task in this domain Figure 6: Classification performance (F1) on Email data. (LN* Results from Srivastava et al. (2017)) supervision from quantification. 
For this task, we found very few of the natural language descriptions to contain quantifiers for some of the individual email categories, making a direct comparison impractical. Thus in this case, we evaluate methods by combining supervision from descriptions in addition to 10 labeled examples (also in line with evaluation in the original paper). We note that additionally incorporating quantification (LNQ) consistently improves classification performance across email categories. On this task, LNQ improves upon FLGE+ and LN* for 6 of the 7 email categories. Figure 7 shows classification results on the Birds data. Here, LR refers to a logistic regression model trained on n=10 examples. The trends in this case are similar, where LNQ consistently outperforms FLGE+, and is competitive with LR. Ablating quantification: From Table 4, we further observe that the differential associative strengths of linguistic quantifiers are crucial for our method’s classification performance. LNQ (no quant) refers to a variant that assigns the same probability value (average of values in Table 1), irrespective of quantifier. This yields a near random performance, which is what we’d expect if the learning is being driven by the differential strengths of quantifiers. LNQ (coarse quant) refers to a variant that rounds assigned quantifier probabilities in Table 1 to 0 or 1. (i.e., quantifiers such are rarely get mapped to 0, while always gets mapped to a probability of 1). While its performance (0.679) suggests that simple binary feedback is a substantial signal, the difference from the full model indicates value in using soft probabilities. On the other hand, in a sensitivity study, we found the performance of the approach to be robust to small changes in the probability values of quantifiers. Comparison with human performance: For the Shapes data, we evaluated human teachers’ own understanding of concepts they teach by evaluating 314 Figure 7: Classification performance on Birds data them on a quiz based on predicting labels for examples from the test set (see Figure 3(b)). Second, we solicit additional workers that were not exposed to examples from the dataset, and present them only with the statements describing that data (created by a teacher), which is comparable supervision to what LNQ receives. We then evaluate their performance at the same task. From Table 4, we note that a human teacher’s average performance is significantly worse (p < 0.05, Wilcoxon signed-rank test) than the Bayes Optimal classifier indicating that the teacher’s own synthesis of concepts is noisy. The human learner performance is expectedly lower, but interestingly is also significantly worse than LNQ. While this might be potentially be caused by factors such as user fatigue, this might also suggest that automated methods can be better at reasoning with constraints than humans in certain scenarios. These results need to be validated through comprehensive experiments in more domains. Empirical semantics of quantifiers: We can estimate the distributions of probability values for different quantifiers from our labeled data. For this, we aggregate sentences mentioning a quantifier, and calculate the empirical value of the (conditional) probability associated with the statement, leading to a set of probability values for each quantifier. Figure 8 shows empirical distributions of probability values for six quantifiers. 
We note that while a few estimates (e.g., ‘rarely’ and ‘often’) roughly align with pre-registered beliefs, others are somewhat off (e.g., ‘likely’ shows a much higher value), and yet others (e.g., ‘sometimes’) show a large spread of values to be meaningfully modeled as point values. LNQ’s performance, inspite of this, shows strong stability in the approach. We don’t use these empirical probabilities in experiments, (instead of pre-registered values), so as not to tune the hyperparameters to a specific dataset. Figure 8: Empirical probability distributions for six quantifiers (Shapes data). Plots show Beta distributions with Method-of-Moment estimates. Red bars correspond to values from Table 1 Such estimates would not be available for a new task without labeled data. Further, using labeled data for estimating these probabilities, and then using the learned model for predicting labels would constitute overfitting, biasing evaluation. 6 Discussion and Future Work Our approach is surprisingly effective in learning from free-form language. However, it does not address linguistic issues such as modifiers (e.g., very likely), nested quantification, etc. On the other hand, we found no instances of nested quantification in the data, suggesting that people might be primed to use simpler language when teaching. While we approximate quantifier semantics as absolute probability values, they may vary significantly based on the context, as shown by cognitive studies such as Newstead and Collis (1987). Future work can model how these parameters can be adapted in a task specific way (e.g., cases such as cancer prediction where base rates are small), and provide better models of quantifier semantics. e.g., as distributions, rather than point values. Our approach is a step towards the idea of using language to guide learning of statistical models. This is an exciting direction, which contrasts with the predominant theme of using statistical learning methods to advance the field of NLP. We believe that language may have as much to help learning, as statistical learning has helped NLP. Acknowledgments This research was supported by the CMU - Yahoo! InMind project. The authors would also like to thank the anonymous reviewers for helpful comments and suggestions. 315 References Jacob Andreas, Dan Klein, and Sergey Levine. 2017. Learning with latent language. CoRR abs/1711.00482. http://arxiv.org/abs/1711.00482. Elke Bach, Eloise Jelinek, Angelika Kratzer, and Barbara BH Partee. 2013. Quantification in natural languages, volume 54. Springer Science & Business Media. Jon Barwise and Robin Cooper. 1981. Generalized quantifiers and natural language. Linguistics and philosophy 4(2):159–219. Kedar Bellare, Gregory Druck, and Andrew McCallum. 2009. Alternating projections for learning with expectation constraints. In Proceedings of the TwentyFifth Conference on Uncertainty in Artificial Intelligence. AUAI Press, pages 43–50. SRK Branavan, David Silver, and Regina Barzilay. 2012. Learning to win by reading manuals in a monte-carlo framework. Journal of Artificial Intelligence Research 43:661–704. Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2007a. Guiding semi-supervision with constraint-driven learning. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Association for Computational Linguistics, Prague, Czech Republic, pages 280–287. http://www.aclweb.org/anthology/P07-1036. Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2007b. Guiding semi-supervision with constraint-driven learning. 
In ACL. pages 280–287. Gregory Druck, Gideon Mann, and Andrew McCallum. 2008. Learning from labeled features using generalized expectation criteria. In Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval. ACM, pages 595–602. Mohamed Elhoseiny, Babak Saleh, and Ahmed Elgammal. 2013. Write a classifier: Zero-shot learning using purely textual descriptions. In The IEEE International Conference on Computer Vision (ICCV). Kuzman Ganchev, Jennifer Gillenwater, Ben Taskar, et al. 2010. Posterior regularization for structured latent variable models. Journal of Machine Learning Research 11(Jul):2001–2049. Siddharth Karamcheti, Edward C Williams, Dilip Arumugam, Mina Rhee, Nakul Gopalan, Lawson LS Wong, and Stefanie Tellex. 2017. A tale of two draggns: A hybrid approach for interpreting actionoriented and goal-oriented instructions. arXiv preprint arXiv:1707.08668 . Angelika Kimmig, Stephen H. Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2012. A short introduction to probabilistic soft logic. In NIPS Workshop on Probabilistic Programming: Foundations and Applications. Howard S Kurtzman and Maryellen C MacDonald. 1993. Resolution of quantifier scope ambiguities. Cognition 48(3):243–279. Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. 2015. Human-level concept learning through probabilistic program induction. Science 350(6266):1332–1338. Christoph H Lampert, Hannes Nickisch, and Stefan Harmeling. 2014. Attribute-based classification for zero-shot visual object categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(3):453–465. Jimmy Lei Ba, Kevin Swersky, Sanja Fidler, and Ruslan Salakhutdinov. 2015. Predicting deep zero-shot convolutional neural networks using textual descriptions. In The IEEE International Conference on Computer Vision (ICCV). Percy Liang, Michael I Jordan, and Dan Klein. 2009. Learning from measurements in exponential families. In Proceedings of the 26th annual international conference on machine learning. ACM, pages 641– 648. Percy Liang, Michael I Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 590–599. M. Lichman. 2013. UCI machine learning repository. http://archive.ics.uci.edu/ml. Sebastian Löbner. 1987. Quantification as a major module of natural language semantics. Studies in discourse representation theory and the theory of generalized quantifiers 8:53. Gideon S Mann and Andrew McCallum. 2010. Generalized expectation criteria for semi-supervised learning with weakly labeled data. Journal of machine learning research 11(Feb):955–984. Linda M Moxey and Anthony J Sanford. 1993. Prior expectation and the interpretation of natural language quantifiers. European Journal of Cognitive Psychology 5(1):73–91. Stephen E Newstead and Janet M Collis. 1987. Context and the interpretation of quantifiers of frequency. Ergonomics 30(10):1447–1462. Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2017. Joint concept learning and semantic parsing from natural language explanations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1528–1537. http://aclweb.org/anthology/D17-1161. C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. 2011. 
The Caltech-UCSD Birds-200-2011 Dataset. Technical report. Ilker Yildirim, Judith Degen, Michael K Tanenhaus, and T Florian Jaeger. 2013. Linguistic variability and adaptation in quantifier meanings. In CogSci. Luke S Zettlemoyer and Michael Collins. 2007. Online learning of relaxed ccg grammars for parsing to logical form. In EMNLP-CoNLL. pages 678–687.
2018
29
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 23–33 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 23 Unsupervised Learning of Distributional Relation Vectors Shoaib Jameel School of Computing Medway Campus University of Kent, UK [email protected] Zied Bouraoui CRIL CNRS and Artois University France [email protected] Steven Schockaert School of Computer Science and Informatics Cardiff University, UK [email protected] Abstract Word embedding models such as GloVe rely on co-occurrence statistics to learn vector representations of word meaning. While we may similarly expect that cooccurrence statistics can be used to capture rich information about the relationships between different words, existing approaches for modeling such relationships are based on manipulating pre-trained word vectors. In this paper, we introduce a novel method which directly learns relation vectors from co-occurrence statistics. To this end, we first introduce a variant of GloVe, in which there is an explicit connection between word vectors and PMI weighted co-occurrence vectors. We then show how relation vectors can be naturally embedded into the resulting vector space. 1 Introduction Word embeddings are vector space representations of word meaning (Mikolov et al., 2013b; Pennington et al., 2014). A remarkable property of these models is that they capture various lexical relationships, beyond mere similarity. For example, (Mikolov et al., 2013b) found that analogy questions of the form “a is to b what c is to ?” can often be answered by finding the word d that maximizes cos(wb −wa +wc, wd), where we write wx for the vector representation of a word x. Intuitively, the word vector wa represents a in terms of its most salient features. For example, wparis implicitly encodes that Paris is located in France and that it is a capital city, which is intuitively why the ‘capital of’ relation can be modeled in terms of a vector difference. Other relationships, however, such as the fact that Macron succeeded Hollande as president of France, are unlikely to be captured by word embeddings. Relation extraction methods can discover such information by analyzing sentences that contain both of the words or entities involved (Mintz et al., 2009; Riedel et al., 2010; dos Santos et al., 2015), but they typically need a large number of training examples to be effective. A third alternative, which we consider in this paper, is to characterize the relatedness between two words s and t by learning a relation vector rst in an unsupervised way from corpus statistics. Among others, such vectors can be used to find word pairs that are similar to a given word pair (i.e. finding analogies), or to find the most prototypical examples among a given set of relation instances. They can also be used as an alternative to the aforementioned relation extraction methods, by subsequently training a classifier that uses the relation vectors as input, which might be particularly effective in cases where only limited amounts of training data are available (with the case of analogy finding from a single instance being an extreme example). The most common unsupervised approach for learning relation vectors consists of averaging the embeddings of the words that occur in between s and t, in sentences that contain both (Weston et al., 2013; Fan et al., 2015; Hashimoto et al., 2015). 
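As a rough sketch of this averaging strategy (assuming pre-trained word vectors and tokenized sentences; the function name and the handling of out-of-vocabulary words are illustrative choices):

```python
import numpy as np

def average_relation_vector(s, t, sentences, word_vectors, dim=300):
    """Average the embeddings of the words occurring between s and t,
    over the sentences in which s appears before t."""
    per_sentence = []
    for tokens in sentences:
        if s in tokens and t in tokens:
            i, k = tokens.index(s), tokens.index(t)
            if i < k:
                between = [word_vectors[w] for w in tokens[i + 1:k] if w in word_vectors]
                if between:
                    per_sentence.append(np.mean(between, axis=0))
    return np.mean(per_sentence, axis=0) if per_sentence else np.zeros(dim)
```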
While this strategy is often surprisingly effective (Hill et al., 2016), it is sub-optimal for two reasons. First, many of the words co-occurring with s and t will be semantically related to s or to t, but will not actually be descriptive for the relationship between s and t; e.g. the vector describing the relation between Paris and France should not be affected by words such as eiffel (which only relates to Paris). Second, it gives too much weight to stopwords, which cannot be addressed in a straightforward way as some stop-words are actually crucial for modeling relationships (e.g. prepositions such 24 as ‘in’ or ‘of’ or Hearst patterns (Indurkhya and Damerau, 2010)). In this paper, we propose a method for learning relation vectors directly from co-occurrence statistics. We first introduce a variant of GloVe, in which word vectors can be directly interpreted as smoothed PMI-weighted bag-of-words representations. We then represent relationships between words as weighted bag-of-words representations, using generalizations of PMI to three arguments, and learn vectors that correspond to smoothed versions of these representations. As far as the possible applications of our methodology is concerned, we imagine that relation vectors can be used in various ways to enrich the input to neural network models. As a simple example, in a question answering system, we could “annotate” mentions of entities with relation vectors encoding their relationship to the different words from the question. As another example, we could consider a recommendation system which takes advantage of vectors expressing the relationship between items that have been bought (or viewed) by a customer and other items from the catalogue. Finally, relation vectors should also be useful for knowledge completion, especially in cases where few training examples per relation type are given (meaning that neural network models could not be used) and where relations cannot be predicted from the already available knowledge (meaning that knowledge graph embedding methods could not be used, or are at least not sufficient). 2 Related Work The problem of characterizing the relationship between two words has been studied in various settings. From a learning point of view, the most straightforward setting is where we are given labeled training sentences, with each label explicitly indicating what relationship is expressed in the sentence. This fully supervised setting has been the focus of several evaluation campaigns, including as part of ACE (Doddington et al., 2004) and at SemEval 2010 (Hendrickx et al., 2010). A key problem with this setting, however, is that labeled training data is hard to obtain. A popular alternative is to use known instances of the relations of interest as a form of distant supervision (Mintz et al., 2009; Riedel et al., 2010). Some authors have also considered unsupervised relation extraction methods (Shinyama and Sekine, 2006; Banko et al., 2007), in which case the aim is essentially to find clusters of patterns that express similar relationships, although these relationships may not correspond to the ones that are needed for the considered application. Finally, several systems have also used bootstrapping strategies (Brin, 1998; Agichtein and Gravano, 2000; Carlson et al., 2010), where a small set of instances are used to find extraction patterns, which are used to find more instances, which can in turn be used to find better extraction patterns, etc. 
Traditionally, relation extraction systems have relied on a variety of linguistic features, such as lexical patterns, part-of-speech tags and dependency parsers. More recently, several neural network architectures have been proposed for the relation extraction problem. These architectures rely on word embeddings to represent the words in the input sentence, and manipulate these word vectors to construct a relation vector. Some approaches simply represent the sentence (or the phrase connecting the entities whose relationship we want to determine) as a sequence of words, and use e.g. convolutional networks to aggregate the vectors of the words in this sequence (Zeng et al., 2014; dos Santos et al., 2015). Another possibility, explored in (Socher et al., 2012), is to use parse trees to capture the structure of the sentence, and to use recursive neural networks (RNNs) to aggregate the word vectors in a way which respects this structure. A similar approach is taken in (Xu et al., 2015), where LSTMs are applied to the shortest path between the two target words in a dependency parser. A straightforward baseline method is to simply take the average of the word vectors (Mitchell and Lapata, 2010). While conceptually much simpler, variants of this approach have obtained state-of-the-art performance for relation classification (Hashimoto et al., 2015) and a variety of tasks that require sentences to be represented as a vector (Hill et al., 2016). Given the effectiveness of word vector averaging, in (Kenter et al., 2016) a model was proposed that explicitly tries to learn word vectors that generalize well when being averaged. Similarly, the model proposed in (Hashimoto et al., 2015) aims to produce word vectors that perform well for the specific task of relation classification. The ParagraphVector method from (Le and Mikolov, 2014) is related to the aformentioned approaches, but it explicitly learns a vector representation for each 25 paragraph along with the word embeddings. However, this method is computationally expensive, and often fails to outperform simpler approaches (Hill et al., 2016). To the best of our knowledge, existing methods for learning relation vectors are all based on manipulating pre-trained word vectors. In contrast, we will directly learn relation vectors from corpus statistics, which will have the important advantage that we can focus on words that describe the interaction between the two words s and t, i.e. words that commonly occur in sentences that contain both s and t, but are comparatively rare in sentences that only contain s or only contain t. Finally, note that our work is fundamentally different from Knowledge Graph Embedding (KGE) (Wang et al., 2014b), (Wang et al., 2014a), (Bordes et al., 2011) in at least two ways: (i) KGE models start from a structured knowledge graph whereas we only take a text corpus as input, and (ii) KGE models represent relations as geometric objects in the “entity embedding” itself (e.g. as translations, linear maps, combinations of projections and translations, etc), whereas we represent words and relations in different vector spaces. 3 Word Vectors as PMI Encodings Our approach to relation embedding is based on a variant of the GloVe word embedding model (Pennington et al., 2014). In this section, we first briefly recall the GloVe model itself, after which we discuss our proposed variant. 
A key advantage of this variant is that it allows us to directly interpret word vectors in terms of the Pointwise Mutual Information (PMI), which will be central to the way in which we learn relation vectors. 3.1 Background The GloVe model (Pennington et al., 2014) learns a vector wi for each word i in the vocabulary, based on a matrix of co-occurrence counts, encoding how often two words appear within a given window. Let us write xij for the number of times word j appears in the context of word i in some text corpus. More precisely, assume that there are m sentences in the corpus, and let Pl i ⊆{1, ..., nl} be the set of positions from the lth sentence where the word i can be found (with nl the length of the sentence). Then xij is defined as follows: m X l=1 X p∈Pl i X q∈Pl j weight(p, q) where weight(p, q) = 1 |p−q| if 0 < |p −q| ≤W, and weight(p, q) = 0 otherwise, where the window size W is usually set to 5 or 10. The GloVe model learns for each word i two vectors wi and ˜wi by optimizing the following objective: X i X j:xij̸=0 f(xij)(wi· ˜ wj + bi + ˜bj −log xij)2 where f is a weighting function, aimed at reducing the impact of rare terms, and bi and ˜bj are bias terms. The GloVe model is closely related to the notion of pointwise mutual information (PMI), which is defined for two words i and j as PMI(i, j) = log P(i,j) P(i)P(j)  , where P(i, j) is the probability of seeing the words i and j if we randomly pick a word position from the corpus and a second word position within distance W from the first position. The PMI between i and j is usually estimated as follows: PMIX(i, j) = log xijx∗∗ xi∗x∗j  where xi∗= P j xij, x∗j = P i xij and x∗∗= P i P j xij. In particular, it is straightforward to see that after the reparameterization given by bi 7→ bi + log xi∗−log x∗∗and bj 7→bj + log x∗j, the GloVe model is equivalent to X i X j xij̸=0 f(xij)(wi· ˜ wj + bi + ˜bj −PMIX(i, j))2 (1) 3.2 A Variant of GloVe In this paper, we will use the following variant of the formulation in (1): X i X j∈Ji 1 σ2 j (wi· ˜ wj + ˜bj −PMIS(i, j))2 (2) Despite its similarity, this formulation differs from the GloVe model in a number of important ways. First, we use smoothed frequency counts instead of the observed frequency counts xij. In particular, the PMI between words i and j is given as: PMIS(i, j) = log  P(i, j) P(i)P(j)  26 where the probabilities are estimated as follows: P(i) = xi∗+ α x∗∗+ nα P(j) = x∗j + α x∗∗+ nα P(i, j) = xij + α x∗∗+ n2α where α ≥0 is a parameter controlling the amount of smoothing and n is the size of the vocabulary. This ensures that the estimation of PMI(i, j) is well-defined even in cases where xij = 0, meaning that we no longer have to restrict the inner summation to those j for which xij > 0. For efficiency reasons, in practice, we only consider a small subset of all context words j for which xij = 0, which is similar in spirit to the use of negative sampling in Skip-gram (Mikolov et al., 2013b). In particular, the set Ji contains each j such that xij > 0 as well as M uniformly1 sampled context words j for which xij = 0, where we choose M = 2 · |{j : xij > 0}|. Second, following (Jameel and Schockaert, 2016), the weighting function f(xij) has been replaced by 1 σ2 j , where σ2 j is the residual variance of the regression problem for context word j, estimated follows: σ2 j = 1 |J−1 j | X i∈J−1 j (wi · ˜ wj + ˜bj −PMIS(i, j))2 with J−1 j = {i : j ∈Ji}. 
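A minimal sketch of these two modifications, the smoothed PMI estimate and the residual-variance weights, assuming a dense co-occurrence matrix for readability (the actual model works with sparse counts and sampled negative context words):

```python
import numpy as np

def smoothed_pmi(X, i, j, alpha=1e-5):
    """PMI_S(i, j) from an (n x n) co-occurrence count matrix X, with the
    additive smoothing of Section 3.2 controlled by alpha."""
    n = X.shape[0]
    total = X.sum()
    p_i = (X[i].sum() + alpha) / (total + n * alpha)
    p_j = (X[:, j].sum() + alpha) / (total + n * alpha)
    p_ij = (X[i, j] + alpha) / (total + n * n * alpha)
    return np.log(p_ij / (p_i * p_j))

def residual_variance(j, targets_j, W, W_ctx, b_ctx, X, alpha=1e-5):
    """sigma_j^2: mean squared residual of the regression for context word j,
    computed over targets_j = J_j^{-1}, the target words i with j in J_i."""
    residuals = [W[i] @ W_ctx[j] + b_ctx[j] - smoothed_pmi(X, i, j, alpha)
                 for i in targets_j]
    return float(np.mean(np.square(residuals)))
```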
Since we need the word vectors to estimate this residual variance, we reestimate σ2 j after every five iterations of the SGD optimization. For the first 5 iterations, where no estimation for σ2 j is available, we use the GloVe weighting function. The use of smoothed frequency counts and residual variance based weighting make the word embedding model more robust for rare words. For instance, if w only co-occurs with a handful of other terms, it is important to prioritize the most informative context words, which is exactly what the use of the residual variance achieves, i.e. σ2 j is small for informative terms and large for stop words; see (Jameel and Schockaert, 2016). This will be important for modeling relations, as the relation vectors will often have to be estimated from very sparse co-occurrence counts. 1While the negative sampling method used in Skip-gram favors more frequent words, initial experiments suggested that deviating from a uniform distribution almost had no impact in our setting. Finally, the bias term bi has been omitted from the model in (2). We have empirically found that omitting this bias term does not affect the performance of the model, while it allows us to have a more direct connection between the vector wi and the corresponding PMI scores. 3.3 Word Vectors and PMI Let us define PMIW as follows: PMIW (i, j) = wi· ˜ wj + ˜bj Clearly, when the word vectors are trained according to (2), it holds that PMIW (i, j) ≈PMIS(i, j). In other words, we can think of the word vector wi as a low-dimensional encoding of the vector (PMIS(i, 1), ..., PMIS(i, n)), with n the number of words in the vocabulary. This view allows us to assign a natural interpretation to some word vector operations. In particular, the vector difference wi −wk is commonly used as a model for the relationship between words i and k. For a given context word j, we have (wi −wk) · ˜wj = PMIW (i, j) −PMIW (k, j) The latter is an estimation of log  P(i,j) P(i)P(j)  − log  P(k,j) P(k)P(j)  = log  P(j|i) P(j|k)  . In other words, the vector translation wi −wk encodes for each context word j the (log) ratio of the probability of seeing j in the context of i and in the context of k, which is in line with the original motivation underlying the GloVe model (Pennington et al., 2014). In the following section, we will propose a number of alternative vector representations for the relationship between two words, based on generalizations of PMI to three arguments. 4 Learning Global Relation Vectors We now turn to the problem of learning a vector rik that encodes how the source word i and target word k are related. The main underlying idea is that rik will capture which context words j are most closely associated with the word pair (i, k). Whereas the GloVe model is based on statistics about (main word, context word) pairs, here we will need statistics on (source word, context word, target word) triples. First, we discuss how cooccurrence statistics among three words can be expressed using generalizations of PMI to three arguments. Then we explain how this can be used to learn relation vectors in natural way. 27 4.1 Co-occurrence Statistics for Triples Let Pl i ⊆{1, ..., nl} again be the set of positions from the lth sentence corresponding to word i. We define: yijk = m X l=1 X p∈Pl i X q∈Pl j X r∈Pl k weight(p, q, r) where weight(p, q, r) = max( 1 q−p, 1 r−q) if p < q < r and r−p ≤W, and weight(p, q, r) = 0 otherwise. 
In other words, yijk reflects the (weighted) number of times word j appears between words i and k in a sentence in which i and k occur sufficiently close to each other, in that order. Note that by taking word order into account in this way, we will be able to model asymmetric relationships. To model how strongly a context word j is associated with the word pair (i, k), we will consider the following two well-known generalizations of PMI to three arguments (Van de Cruys, 2011): SI1(i, j, k) = log  P(i, j)P(i, k)P(j, k) P(i)P(j)P(k)P(i, j, k)  SI2(i, j, k) = log  P(i, j, k) P(i)P(j)P(k)  where P(i, j, k) is the probability of seeing the word triple (i, j, k) when randomly choosing a sentence and three (ordered) word positions in that sentence within a window size of W. In addition we will also consider two ways in which PMI can be used more directly: SI3(i, j, k) = log  P(i, j, k) P(i, k)P(j)  SI4(i, j, k) = log  P(i, k|j) P(i|j)P(k|j)  Note that SI3(i, j, k) corresponds to the PMI between (i, k) and j, whereas SI4(i, j, k) is the PMI between i and k conditioned on the fact that j occurs. The measures SI3 and SI4 are closely related to SI1 and SI2 respectively2. In particular, the following identities are easy to show: PMI(i, j) + PMI(j, k) −SI1(i, j, k) = SI3(i, j, k) SI2(i, j, k) −PMI(i, j) −PMI(j, k) = SI4(i, j, k) 2Note that probabilities of the form P(i, j) or P(i) here refer to marginal probabilities over ordered triples. In contrast, the PMI scores from the word embedding model are based on probabilities over unordered word pairs, as is common for word embeddings. Using smoothed versions of the counts yijk, we can use the following probability estimates for SI1(i, j, k)–SI4(i, j, k): P(i, j, k) = yijk + α y∗∗∗+ n3α P(i, j) = yij∗+ α y∗∗∗+ n2α P(i, k) = yi∗k + α y∗∗∗+ n2α P(j, k) = y∗jk + α y∗∗∗+ n2α P(i) = yi∗∗+ α y∗∗∗+ nα P(j) = y∗j∗+ α y∗∗∗+ nα P(k) = y∗∗k + α y∗∗∗+ nα where yij∗= P k yijk, and similar for the other counts. For efficiency reasons, the counts of the form yij∗, yi∗k and y∗jk are pre-computed for all word pairs, which can be done efficiently due to the sparsity of co-occurrence counts (i.e. these counts will be 0 for most pairs of words), similarly to how to the counts xij are computed in GloVe. From these counts, we can also efficiently pre-compute the counts yi∗∗, y∗j∗, y∗∗k and y∗∗∗. On the other hand, the counts yijk cannot be precomputed, since the total number of triples for which yijk ̸= 0 is prohibitively high in a typical corpus. However, using an inverted index, we can efficiently retrieve the sentences that contain the words i and k, and since this number of sentences is typically small, we can efficiently obtain the counts yijk corresponding to a given pair (i, k) whenever they are needed. 4.2 Relation Vectors Our aim is to learn a vector rik that models the relationship between i and k. Computing such a vector for each pair of words (which co-occur at least once) is not feasible, given the number of triples (i, j, k) that would need to be considered. Instead, we first learn a word embedding, by optimizing (2). Then, fixing the context vectors ˜ wj and bias terms bj, we learn a vector representation for a given pair (i, k) of interest by solving the following objective: X j∈Ji,k (rik· ˜ wj + ˜bj −SI(i, j, k))2 (3) where SI refers to one of SI1 S, SI2 S, SI3 S, SI4 S. Note that (3) is essentially the counterpart of (1), where we have replaced the role of the PMI measure by SI. 
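Because the context vectors and bias terms are fixed, (3) is an ordinary least-squares problem that can be solved exactly. A minimal sketch, assuming the SI scores for the pair (i, k) have already been computed:

```python
import numpy as np

def relation_vector(context_ids, W_ctx, b_ctx, si_scores):
    """Solve objective (3) exactly for one word pair (i, k).

    context_ids : indices j in J_{i,k}.
    W_ctx, b_ctx: fixed context vectors w~_j (n x d) and bias terms b~_j (n,).
    si_scores   : dict j -> SI(i, j, k), one of the smoothed SI_1..SI_4 measures.
    Returns r_ik minimizing sum_j (r . w~_j + b~_j - SI(i, j, k))^2.
    """
    A = W_ctx[context_ids]                                   # (|J_{i,k}|, d) design matrix
    y = np.array([si_scores[j] - b_ctx[j] for j in context_ids])
    r_ik, *_ = np.linalg.lstsq(A, y, rcond=None)
    return r_ik
```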
In this way, we can exploit the representations of the context words from the word embedding model for learning relation vectors. Note that the 28 factor 1 σ2 j has been omitted. This is because words j that are normally relatively uninformative (e.g. stop words), for which σ2 j would be high, can actually be very important for characterizing the relationship between i and k. For instance, the phrase “X such as Y ” clearly suggests a hyponomy relationship between X and Y , but both ‘such’ and ‘as’ would be associated with a high residual variance σ2 j . The set Ji,k contains every j for which yijk > 0 as well as a random sample of m words for which yijk = 0, where m = 2 · |{j : yijk > 0|. Note that because ˜ wj is now fixed, (3) is a linear least squares regression problem, which can be solved exactly and efficiently. The vector rik is based on words that appear between i and k. In the same way, we can learn a vector sik based on the words that appear before i and a vector tik based on the words that appear after k, in sentences where i occurs before k. Furthermore, we also learn vectors rki, ski and tki from the sentences where k occurs before i. As the final representation Rik of the relationship between i and k, we concatenate the vectors rik, rki, sik, ski, tik, tki as well as the word vectors wi and wk. We write Rl ik to denote the vector that results from using measure SIl (l ∈{1, 2, 3, 4}). 5 Experimental Results In our experiments, we have used the Wikipedia dump from November 2nd, 2015, which consists of 1,335,766,618 tokens. We have removed punctuations and HTML/XML tags, and we have lowercased all tokens. Words with fewer than 10 occurrences have been removed from the corpus. To detect sentence boundaries, we have used the Apache sentence segmentation tool. In all our experiments, we have set the number of dimensions to 300, which was found to be a good choice in previous work, e.g. (Pennington et al., 2014). We use a context window size W of 10 words. The number of iterations for SGD was set to 50. For our model, we have tuned the smoothing parameter α based on held-out tuning data, considering values from {0.1, 0.01, 0.001, 0.0001, 0.00001, 0.000001}. We have noticed that in most of the cases the value of α was automatically selected as 0.00001. To efficiently compute the triples, we have used the Zettair3 retrieval engine. As our main baselines, we use three popular unsupervised methods for constructing relation vec3http://www.seg.rmit.edu.au/zettair/ Table 1: Results for the relation induction task. Google Analogy Diff Conc Avg R1 ik R2 ik R3 ik R4 ik Acc 90.0 89.0 89.9 90.0 92.3 90.9 90.4 Pre 81.6 78.7 80.8 79.9 87.1 83.2 81.1 Rec 82.6 83.9 83.9 86.0 84.8 84.8 85.5 F1 82.1 81.2 82.3 82.8 85.9 84.0 83.3 DiffVec Diff Conc Avg R1 ik R2 ik R3 ik R4 ik Acc 29.5 28.9 29.7 29.7 31.3 30.4 30.1 Pre 19.6 18.7 20.4 21.5 22.9 21.9 22.3 Rec 23.8 22.9 23.7 24.5 25.7 25.3 22.9 F1 21.5 20.6 21.9 22.4 24.2 23.5 22.6 tors. First, Diff uses the vector difference wk −wi, following the common strategy of modeling relations as vector differences, as e.g. in (Vylomova et al., 2016). Second, Conc uses the concatenation of wi and wk. This model is more general than Diff but it uses twice as many dimensions, which may make it harder to learn a good classifier from few examples. The use of concatenations is popular e.g. in the context of hypernym detection (Baroni et al., 2012). Finally, Avg averages the vector representations of the words occurring in sentences that Diff, contain i and k. 
In particular, let ravg ik be obtained by averaging the word vectors of the context words appearing between i and k for each sentence containing i and k (in that order), and then averaging the vectors obtained from each of these sentences. Let savg ik and tavg ik be similarly obtained from the words occurring before i and the words occurring after k respectively. The considered relation vector is then defined as the concatenation of ravg ik , ravg ki , savg ik , savg ki , tavg ik , tavg ki , wi and wk. The Avg will allow us to directly compare how much we can improve relation vectors by deviating from the common strategy of averaging word vectors. 5.1 Relation Induction In the relation induction task, we are given word pairs (s1, t1), ..., (sk, tk) that are related in some way, and the task is to decide for a number of test examples (s, t) whether they also have this relationship. Among others, this task was considered in (Vylomova et al., 2016), and a ranking version of this task was studied in (Drozd et al., 2016). As test sets we use the Google Analogy Test Set (Mikolov et al., 2013a), which contains instances of 14 different types of relations, and the DiffVec dataset, which was introduced in (Vylomova et al., 2016). This dataset contains instances of 36 dif29 Table 2: Results for the relation induction task using alternative word embedding models. GloVe SkipGram CBOW Google DiffVec Google DiffVec Google DiffVec Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 Diff 90.0 81.9 21.2 13.9 89.8 81.9 21.7 14.5 89.9 82.1 17.4 9.7 Conc 88.9 80.4 20.2 11.9 89.2 81.6 20.5 12.0 89.1 81.1 16.4 7.7 Avg 89.8 82.1 21.4 13.9 90.2 82.4 21.8 14.4 89.8 82.2 17.5 10.0 R1 ik 89.7 81.7 20.9 12.5 89.4 81.2 21.1 12.3 89.8 81.9 17.2 9.2 R2 ik 90.0 82.8 21.2 13.4 89.1 81.3 21.1 12.9 90.2 82.4 17.7 10.0 R3 ik 90.0 82.3 20.0 11.2 89.5 81.1 20.5 12.3 89.5 81.1 17.2 9.6 R4 ik 90.0 82.5 20.0 11.4 88.9 80.8 20.6 12.1 90.5 82.2 17.1 8.4 ferent types of relations4. Note that both datasets contain a mix of semantic and syntactic relations. In our evaluation, we have used 10-fold crossvalidation (or leave-one-out for relations with fewer than 10 instances). In the experiments, we consider for each relation in the test set a separate binary classification task, which was found to be considerably more challenging than a multi-class classification setting in (Vylomova et al., 2016). To generate negative examples in the training data (resp. test data), we have used three strategies, following (Vylomova et al., 2016). First, for a given positive example (s, t) of the considered relation, we add (t, s) as a negative example. Second, for each positive example (s, t), we generate two negative examples (s, t1) and (s, t2) by randomly selecting two tail words t1, t2 from the other training (resp. test) examples of the same relation. Finally, for each positive example, we also generate a negative example by randomly selecting two words from the vocabulary. For each relation, we then train a linear SVM classifier. To set the parameters of the SVM, we initially use 25% of the training data for tuning, and then retrain the SVM with the optimal parameters on the full training data. The results are summarized in Table 1 in terms of accuracy and (macro-averaged) precision, recall and F1 score. As can be observed, our model outperforms the baselines on both datasets, with the R2 ik variant outperforming the others. 
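A schematic sketch of this per-relation protocol, covering negative-example generation and the binary linear SVM; the featurize function stands in for any of the relation representations above, and the tuning step on 25% of the training data is omitted for brevity:

```python
import random
from sklearn.svm import LinearSVC

def make_negatives(pairs, vocab, rng=random.Random(0)):
    """Negatives for one relation: the reversed pair, two pairs with a resampled
    tail word (drawn from the relation's tail words here), and a random pair."""
    tails = [t for _, t in pairs]
    neg = []
    for s, t in pairs:
        neg.append((t, s))
        neg.append((s, rng.choice(tails)))
        neg.append((s, rng.choice(tails)))
        neg.append((rng.choice(vocab), rng.choice(vocab)))
    return neg

def train_relation_classifier(pos_pairs, vocab, featurize, C=1.0):
    """Binary classifier for a single relation in the induction task."""
    neg_pairs = make_negatives(pos_pairs, vocab)
    X = [featurize(s, t) for s, t in pos_pairs + neg_pairs]
    y = [1] * len(pos_pairs) + [0] * len(neg_pairs)
    return LinearSVC(C=C).fit(X, y)
```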
To analyze the benefit of our proposed word embedding variant, Table 2 shows the results that were obtained when we use standard word embedding models. In particular, we show results for the standard GloVe model, SkipGram and the Continuous Bag of Words (CBOW) model. As can be observed, our variant leads to better results than the original GloVe model, even for the baselines. 4Note that in contrast to (Vylomova et al., 2016) we use all 36 relations from this dataset, including those with very few instances. Table 3: Relation induction without position weighting (left) and without the relation vectors sik and tik (right). Google DiffVec Acc F1 Acc F1 R1 ik 89.7 82.4 30.2 22.2 R2 ik 91.0 83.4 30.8 24.1 R3 ik 90.4 83.2 30.1 22.3 R4 ik 90.2 82.9 29.1 21.2 Google DiffVec Acc F1 Acc F1 R1 ik 90.0 82.5 29.9 22.3 R2 ik 92.3 85.8 31.2 24.2 R3 ik 90.5 83.2 30.2 23.0 R4 ik 90.3 83.1 29.8 22.3 The difference is particularly noticeable for DiffVec. The difference is also larger for our relation vectors than for the baselines, which is expected as our method is based on the assumption that context word vectors can be interpreted in terms of PMI scores, which is only true for our variant. Similar as in the GloVe model, the context words in our model are weighted based on their distance to the nearest target word. Table 3 shows the results of our model without this weighting, for the relation induction task. Comparing these results with those in Table 1 shows that the weighting scheme indeed leads to a small improvement (except for the accuracy of R1 ik for DiffVec). Similarly, in Table 3, we show what happens if the relation vectors sik, ski, tik and tki are omitted. In other words, for the results in Table 3, we only use context words that appear between the two target words. Again, the results are worse than those in Table 1 (with the accuracy of R1 ik for DiffVec again being an exception), although the differences are very small in this case. While including the vectors sik, ski, tik, tki should be helpful, it also significantly increases the dimensionality of the vectors Rl ik. Given that the number of instances per relation is typically quite small for this 30 Table 4: Results for measuring degrees of prototypicality (Spearman ρ × 100). Diff Conc Avg R1 ik R2 ik R3 ik R4 ik 17.3 16.7 21.1 22.7 23.9 21.8 22.2 task, this can also make it harder to learn a suitable classifier. 5.2 Measuring Degrees of Prototypicality Instances of relations can often have different degrees of prototypicality. For example, for the relation “X characteristically makes the sound Y ”, the pair (dog,bark) should be considered more prototypical than the pair (floor,squeak), even though both pairs might be considered to be instances of the relation (Jurgens et al., 2012). A suitable relation vector should allow us to rank word pairs according to how prototypical they are as instances of that relation. We evaluate this ability using a dataset that was produced in the aftermath of SemEval 2012 Task 2. In particular, we have used the “Phase2AnswerScaled” data from the platinum rankings dataset, which is available from the SemEval 2012 Task 2 website5. In this dataset, 79 ranked list of word pairs are provided, each of which corresponds to a particular relation. For each relation, we first split the associated ranking into 60% training, 20% tuning, and 20% testing (i.e. we randomly select 60% of the word pairs and use their ranking as training data, and similar for tuning and test data). 
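As a small illustration of the data preparation, the sketch below splits one relation's ranked list into the 60/20/20 portions while keeping each pair's scaled prototypicality rating as its regression target. The function name and the uniform random split are our assumptions; the ratings themselves come from the "Phase2AnswerScaled" file.

```python
import random

def split_ranking(pairs_with_scores, rng=random):
    """Randomly split one relation's ranked word pairs into
    60% train / 20% tune / 20% test, keeping the scaled scores."""
    items = list(pairs_with_scores)      # [((word1, word2), score), ...]
    rng.shuffle(items)
    n = len(items)
    n_train, n_tune = int(0.6 * n), int(0.2 * n)
    return (items[:n_train],
            items[n_train:n_train + n_tune],
            items[n_train + n_tune:])
```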
We then train a linear SVM regression model on the ranked word pairs. Note that this task slightly differs from the task that was considered at SemEval 2012, to allow us to use an SVM based model for consistency with the rest of the paper. We report results using Spearman’s ρ in Table 4. Our model again outperforms the baselines, with R2 ik again being the best variant. Interestingly, in this case, the Avg baseline is considerably stronger than Diff and Conc. Intuitively, we might indeed expect that this ranking problem requires a more fine-grained representation than the relation induction setting. Note that the Diff representations were found to achieve near state-of-theart performance on a closely related task in (Zhila et al., 2013). The only model that was found to perform (slightly) better was a hybrid model, combining Diff representations with linguistic patterns 5https://sites.google.com/site/semeval2012task2/download 0 0.1 0.2 0.3 0.4 0 0.2 0.4 0.6 0.8 1 Recall Precision R1 ik R2 ik R2 ik(Quadratic) R3 ik R4 ik Avg Diff Conc Figure 1: Results for the relation extraction from the NYT corpus: comparison with the main baselines. (inspired by (Rink and Harabagiu, 2012)) and lexical databases, among others. 5.3 Relation Extraction Finally, we consider the problem of relation extraction from a text corpus. Specifically, we consider the task proposed in (Riedel et al., 2010), which is to extract (subject,predicate,object) triples from the New York Times (NYT) corpus. Rather than having labelled sentences as training data, the task requires using the existing triples from Freebase as a form of distant supervision, i.e. for some pairs of entities we know some of the relations that hold between them, but not which sentences assert these relationships (if any). To be consistent with published results for this task, we have used a word embedding that was trained from the NYT corpus6, rather than Wikipedia (using the same preprocessing and set-up). We have used the training and test data that was shared publicly for this task7, which consist of sentences from articles published in 2005-2006 and in 2007, respectively. Each of these sentences contains two entities, which are already linked to Freebase. We learn relation vectors from the sentences in the training and test sets, and learn a linear SVM classifier based on the Freebase triples that are available in the training set. Initially, we split the training data into 75% training and 25% tuning to find the optimal parameters of the linear SVM model. We tuned the parameters for each test fold sepa6https://catalog.ldc.upenn.edu/LDC2008T19 7http://iesl.cs.umass.edu/riedel/ecml/ 31 0 0.1 0.2 0.3 0.4 0 0.2 0.4 0.6 0.8 1 Recall Precision R2 ik(Quadratic) CNN+ATT Hoffmann PCNN+ATT MIMLRE Mintz Figure 2: Results for the relation extraction from the NYT corpus: comparison with state-of-the-art neural network models. rately. For each test fold, we used 25% of the 9 training folds as tuning data. After the optimal parameters have been determined, we retrain the model on the full training data, and apply it on the test fold. We used this approach (rather than e.g. fixing a train/tune/test split) because the total number of examples for some of the relations is very small. After tuning, we re-train the SVM models on the full training data. As the number of training examples is larger for this task, we also consider SVMs with a quadratic kernel. 
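The per-fold procedure can be summarised as follows. This is a sketch under the assumption of a binary classifier per relation type; the candidate values of C, the helper names and the use of scikit-learn are ours, and the quadratic kernel corresponds to a polynomial kernel of degree 2.

```python
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def fit_fold_classifier(X_train, y_train, Cs=(0.01, 0.1, 1.0, 10.0),
                        quadratic=True):
    """Tune C on 25% of the nine training folds, then retrain on all
    training data before scoring the held-out test fold."""
    kernel = dict(kernel="poly", degree=2) if quadratic else dict(kernel="linear")
    X_tr, X_tu, y_tr, y_tu = train_test_split(X_train, y_train, test_size=0.25)
    best_C = max(Cs, key=lambda C: SVC(C=C, **kernel)
                 .fit(X_tr, y_tr).score(X_tu, y_tu))
    return SVC(C=best_C, **kernel).fit(X_train, y_train)
```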
Following earlier work on this task, we report our results on the test set as a precisionrecall graph in Figure 1. This shows that the best performance is again achieved by R2 ik, especially for larger recall values. Furthermore, using a quadratic kernel (only shown for R2 ik) outperforms the linear SVM models. Note that the differences between the baselines are more pronounced in this task, with Avg being clearly better than Diff, which is in turn better than Conc. For this relation extraction task, a large number of methods have already been proposed in the literature, with variants of convolutional neural network models with attention mechanisms achieving state-of-the-art performance8. A comparison with these models9 is shown in Figure 2. The performance of R2 ik is comparable with the state-of8Note that such models would not be suitable for the evaluation tasks in Sections 5.1 and 5.2, due to the very limited number of training examples. 9Results for the neural network models have been obtained from https://github.com/thunlp/ TensorFlow-NRE/tree/master/data. the-art PCNN+ATT model (Lin et al., 2016), outperforming it for larger recall values. This is remarkable, as our model is conceptually much simpler, and has not been specifically tuned for this task. For instance, it could easily be improved by incorporating the attention mechanism from the PCNN+ATT model to focus the relation vectors on the considered task. Similarly, we could consider a supervised variant of (3), in which a learned relation-specific weight is added to each term. 6 Conclusions We have proposed an unsupervised method which uses co-occurrences statistics to represent the relationship between a given pair of words as a vector. In contrast to neural network models for relation extraction, our model learns relation vectors in an unsupervised way, which means that it can be used for measuring relational similarities and related tasks. Moreover, even in (distantly) supervised tasks (where we need to learn a classifier on top of the unsupervised relation vectors), our model has proven competitive with state-of-the-art neural network models. Compared to approaches that rely on averaging word vectors, our method is able to learn more faithful representations by focusing on the words that are most strongly related to the considered relationship. Acknowledgments This work was supported by ERC Starting Grant 637277. Experiments in this work were performed using the computational facilities of the Advanced Research Computing at Cardiff (ARCCA) Division, Cardiff University and the ICARUS computational facility from Information Services, at the University of Kent. References Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of the Fifth ACM Conference on Digital libraries. pages 85–94. Michele Banko, Michael J Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proc. IJCAI. pages 2670–2676. Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In Proc. EACL. pages 23–32. 32 A. Bordes, J. Weston, R. Collobert, and Y. Bengio. 2011. Learning structured embeddings of knowledge bases. In AAAI. Sergey Brin. 1998. Extracting patterns and relations from the world wide web. In International Workshop on The World Wide Web and Databases. pages 172–183. 
Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam .R. Hruschka Jr., and Tom M. Mitchell. 2010. Toward an architecture for neverending language learning. In Proc. AAAI. pages 1306–1313. George R. Doddington, Alexis Mitchell, Mark A. Przybocki, Lance A. Ramshaw, Stephanie Strassel, and Ralph M. Weischedel. 2004. The automatic content extraction (ACE) program - tasks, data, and evaluation. In Proc. LREC. C´ıcero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proc. ACL. pages 626–634. Aleksandr Drozd, Anna Gladkova, and Satoshi Matsuoka. 2016. Word embeddings, analogies, and machine learning: Beyond king - man + woman = queen. In Proc. COLING. pages 3519–3530. Miao Fan, Kai Cao, Yifan He, and Ralph Grishman. 2015. Jointly embedding relations and mentions for knowledge population. In Proc. RANLP. pages 186– 191. Kazuma Hashimoto, Pontus Stenetorp, Makoto Miwa, and Yoshimasa Tsuruoka. 2015. Task-oriented learning of word embeddings for semantic relation classification. In Proc. CoNLL. pages 268–278. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proc. SemEval. pages 33–38. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proc. NAACL-HLT. pages 1367–1377. Nitin Indurkhya and Fred J Damerau. 2010. Handbook of natural language processing, volume 2. CRC Press. Shoaib Jameel and Steven Schockaert. 2016. D-GloVe: A feasible least squares model for estimating word embedding densities. In Proc. COLING. pages 1849–1860. David A Jurgens, Peter D Turney, Saif M Mohammad, and Keith J Holyoak. 2012. Semeval-2012 task 2: Measuring degrees of relational similarity. In Proc. *SEM. pages 356–364. Tom Kenter, Alexey Borisov, and Maarten de Rijke. 2016. Siamese CBOW: optimizing word embeddings for sentence representations. In Proc. ACL. Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proc. ICML. pages 1188–1196. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proc. ACL. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proc. ICLR. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proc. NIPS. pages 3111–3119. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proc. ACL. pages 1003–1011. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science 34(8):1388–1429. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proc. EMNLP. pages 1532– 1543. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Proc. ECML/PKDD. pages 148– 163. Bryan Rink and Sanda Harabagiu. 2012. UTD: Determining relational similarity using lexical patterns. In Proceedings of the First Joint Conference on Lexical and Computational Semantics. pages 413–418. 
Yusuke Shinyama and Satoshi Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In Proc. NAACL-HLT. pages 304– 311. Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proc. EMNLP. pages 1201–1211. Tim Van de Cruys. 2011. Two multivariate generalizations of pointwise mutual information. In Proceedings of the Workshop on Distributional Semantics and Compositionality. pages 16–20. Ekaterina Vylomova, Laura Rimell, Trevor Cohn, and Timothy Baldwin. 2016. Take and took, gaggle and goose, book and read: Evaluating the utility of vector differences for lexical relation learning. In Proc. ACL. 33 Z. Wang, J. Zhang, J. Feng, and Z. Chen. 2014a. Knowledge graph and text jointly embedding. In EMNLP. pages 1591–1601. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014b. Knowledge graph embedding by translating on hyperplanes. In AAAI. pages 1112– 1119. Jason Weston, Antoine Bordes, Oksana Yakhnenko, and Nicolas Usunier. 2013. Connecting language and knowledge bases with embedding models for relation extraction. In Proc. EMNLP. pages 1366– 1371. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In Proc. EMNLP. pages 1785–1794. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao, et al. 2014. Relation classification via convolutional deep neural network. In Proc. COLING. pages 2335–2344. Alisa Zhila, Wen-tau Yih, Christopher Meek, Geoffrey Zweig, and Tomas Mikolov. 2013. Combining heterogeneous models for measuring relational similarity. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 1000–1009.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 317–327 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 317 Sentence-State LSTM for Text Representation Yue Zhang1, Qi Liu1 and Linfeng Song2 1Singapore University of Technology and Design 2Department of Computer Science, University of Rochester {yue zhang, qi liu}@sutd.edu.sg, [email protected] Abstract Bi-directional LSTMs are a powerful tool for text representation. On the other hand, they have been shown to suffer various limitations due to their sequential nature. We investigate an alternative LSTM structure for encoding text, which consists of a parallel state for each word. Recurrent steps are used to perform local and global information exchange between words simultaneously, rather than incremental reading of a sequence of words. Results on various classification and sequence labelling benchmarks show that the proposed model has strong representation power, giving highly competitive performances compared to stacked BiLSTM models with similar parameter numbers. 1 Introduction Neural models have become the dominant approach in the NLP literature. Compared to handcrafted indicator features, neural sentence representations are less sparse, and more flexible in encoding intricate syntactic and semantic information. Among various neural networks for encoding sentences, bi-directional LSTMs (BiLSTM) (Hochreiter and Schmidhuber, 1997) have been a dominant method, giving state-of-the-art results in language modelling (Sundermeyer et al., 2012), machine translation (Bahdanau et al., 2015), syntactic parsing (Dozat and Manning, 2017) and question answering (Tan et al., 2015). Despite their success, BiLSTMs have been shown to suffer several limitations. For example, their inherently sequential nature endows computation non-parallel within the same sentence (Vaswani et al., 2017), which can lead to a computational bottleneck, hindering their use in the in... ... ... ... ... ... ... ... time 0 1 ... t-1 t Figure 1: Sentence-State LSTM dustry. In addition, local ngrams, which have been shown a highly useful source of contextual information for NLP, are not explicitly modelled (Wang et al., 2016). Finally, sequential information flow leads to relatively weaker power in capturing longrange dependencies, which results in lower performance in encoding longer sentences (Koehn and Knowles, 2017). We investigate an alternative recurrent neural network structure for addressing these issues. As shown in Figure 1, the main idea is to model the hidden states of all words simultaneously at each recurrent step, rather than one word at a time. In particular, we view the whole sentence as a single state, which consists of sub-states for individual words and an overall sentence-level state. To capture local and non-local contexts, states are updated recurrently by exchanging information between each other. Consequently, we refer to our model as sentence-state LSTM, or S-LSTM in short. Empirically, S-LSTM can give effective sentence encoding after 3 – 6 recurrent steps. In contrast, the number of recurrent steps necessary for BiLSTM scales with the size of the sentence. 318 At each recurrent step, information exchange is conducted between consecutive words in the sentence, and between the sentence-level state and each word. In particular, each word receives information from its predecessor and successor simultaneously. 
From an initial state without information exchange, each word-level state can obtain 3-gram, 5-gram and 7-gram information after 1, 2 and 3 recurrent steps, respectively. Being connected with every word, the sentence-level state vector serves to exchange non-local information with each word. In addition, it can also be used as a global sentence-level representation for classification tasks. Results on both classification and sequence labelling show that S-LSTM gives better accuracies compared to BiLSTM using the same number of parameters, while being faster. We release our code and models at https://github.com/ leuchine/S-LSTM, which include all baselines and the final model. 2 Related Work LSTM (Graves and Schmidhuber, 2005) showed its early potentials in NLP when a neural machine translation system that leverages LSTM source encoding gave highly competitive results compared to the best SMT models (Bahdanau et al., 2015). LSTM encoders have since been explored for other tasks, including syntactic parsing (Dyer et al., 2015), text classification (Yang et al., 2016) and machine reading (Hermann et al., 2015). Bidirectional extensions have become a standard configuration for achieving state-of-the-art accuracies among various tasks (Wen et al., 2015; Ma and Hovy, 2016; Dozat and Manning, 2017). SLSTMs are similar to BiLSTMs in their recurrent bi-directional message flow between words, but different in the design of state transition. CNNs (Krizhevsky et al., 2012) also allow better parallelisation compared to LSTMs for sentence encoding (Kim, 2014), thanks to parallelism among convolution filters. On the other hand, convolution features embody only fix-sized local ngram information, whereas sentence-level feature aggregation via pooling can lead to loss of information (Sabour et al., 2017). In contrast, S-LSTM uses a global sentence-level node to assemble and back-distribute local information in the recurrent state transition process, suffering less information loss compared to pooling. Attention (Bahdanau et al., 2015) has recently been explored as a standalone method for sentence encoding, giving competitive results compared to Bi-LSTM encoders for neural machine translation (Vaswani et al., 2017). The attention mechanism allows parallelisation, and can play a similar role to the sentence-level state in S-LSTMs, which uses neural gates to integrate word-level information compared to hierarchical attention. S-LSTM further allows local communication between neighbouring words. Hierarchical stacking of CNN layers (LeCun et al., 1995; Kalchbrenner et al., 2014; Papandreou et al., 2015; Dauphin et al., 2017) allows better interaction between non-local components in a sentence via incremental levels of abstraction. S-LSTM is similar to hierarchical attention and stacked CNN in this respect, incrementally refining sentence representations. However, S-LSTM models hierarchical encoding of sentence structure as a recurrent state transition process. In nature, our work belongs to the family of LSTM sentence representations. S-LSTM is inspired by message passing over graphs (Murphy et al., 1999; Scarselli et al., 2009). Graph-structure neural models have been used for computer program verification (Li et al., 2016) and image object detection (Liang et al., 2016). The closest previous work in NLP includes the use of convolutional neural networks (Bastings et al., 2017; Marcheggiani and Titov, 2017) and DAG LSTMs (Peng et al., 2017) for modelling syntactic structures. 
Compared to our work, their motivations and network structures are highly different. In particular, the DAG LSTM of Peng et al. (2017) is a natural extension of tree LSTM (Tai et al., 2015), and is sequential rather than parallel in nature. To our knowledge, we are the first to investigate a graph RNN for encoding sentences, proposing parallel graph states for integrating word-level and sentence-level information. In this perspective, our contribution is similar to that of Kim (2014) and Bahdanau et al. (2015) in introducing a neural representation to the NLP literature. 3 Model Given a sentence s = w1, w2, . . . , wn, where wi represents the ith word and n is the sentence length, our goal is to find a neural representation of s, which consists of a hidden vector hi for each input word wi, and a global sentence-level hid319 den vector g. Here hi represents syntactic and semantic features for wi under the sentential context, while g represents features for the whole sentence. Following previous work, we additionally add ⟨s⟩ and ⟨/s⟩to the two ends of the sentence as w0 and wn+1, respectively. 3.1 Baseline BiLSTM The baseline BiLSTM model consists of two LSTM components, which process the input in the forward left-to-right and the backward rightto-left directions, respectively. In each direction, the reading of input words is modelled as a recurrent process with a single hidden state. Given an initial value, the state changes its value recurrently, each time consuming an incoming word. Take the forward LSTM component for example. Denoting the initial state as → h 0, which is a model parameter, the recurrent state transition step for calculating → h 1, . . . , → h n+1 is defined as follows (Graves and Schmidhuber, 2005): ˆit = σ(Wixt + Ui → h t−1 + bi) ˆ f t = σ(Wfxt + Uf → h t−1 + bf) ot = σ(Woxt + Uo → h t−1 + bo) ut = tanh(Wuxt + Uu → h t−1 + bu) it, f t = softmax(ˆit, ˆ f t) ct = ct−1 ⊙f t + ut ⊙it → h t = ot ⊙tanh(ct) (1) where xt denotes the word representation of wt; it, ot, f t and ut represent the values of an input gate, an output gate, a forget gate and an actual input at time step t, respectively, which controls the information flow for a recurrent cell → c t and the state vector → h t; Wx, Ux and bx (x ∈{i, o, f, u}) are model parameters. σ is the sigmoid function. The backward LSTM component follows the same recurrent state transition process as described in Eq 1. Starting from an initial state hn+1, which is a model parameter, it reads the input xn, xn−1, . . . , x0, changing its value to ← h n, ← h n−1, . . . , ← h 0, respectively. A separate set of parameters ˆ Wx, ˆUx and ˆbx (x ∈{i, o, f, u}) are used for the backward component. The BiLSTM model uses the concatenated value of → h t and ← h t as the hidden vector for wt: ht = [→ h t; ← h t] A single hidden vector representation g of the whole input sentence can be obtained using the final state values of the two LSTM components: g = [→ h n+1; ← h 0] Stacked BiLSTM Multiple layers of BiLTMs can be stacked for increased representation power, where the hidden vectors of a lower layer are used as inputs for an upper layer. Different model parameters are used in each stacked BiLSTM layer. 3.2 Sentence-State LSTM Formally, an S-LSTM state at time step t can be denoted by: Ht = ⟨ht 0, ht 1, . . . , ht n+1, gt⟩, which consists of a sub state ht i for each word wi and a sentence-level sub state gt. 
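Before turning to the S-LSTM transition, the baseline recurrent step of Eq. 1 can be written compactly as follows. This is a simplified NumPy sketch rather than the released implementation; the parameter containers W, U and b are assumed to be dictionaries keyed by gate name. The one non-standard detail, faithful to Eq. 1, is that the input and forget gates are renormalised jointly so that they sum to one element-wise.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One forward step of the baseline LSTM in Eq. 1."""
    i_hat = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])
    f_hat = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])
    o_t   = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])
    u_t   = np.tanh(W["u"] @ x_t + U["u"] @ h_prev + b["u"])
    # Joint normalisation of the input and forget gates (softmax over the pair).
    e_i, e_f = np.exp(i_hat), np.exp(f_hat)
    i_t, f_t = e_i / (e_i + e_f), e_f / (e_i + e_f)
    c_t = c_prev * f_t + u_t * i_t       # recurrent cell update
    h_t = o_t * np.tanh(c_t)             # hidden state
    return h_t, c_t
```

Running this step left to right over x_1, ..., x_n gives the forward states; a second parameter set run right to left gives the backward states, and the BiLSTM vector for each word is the concatenation of the two, as described above.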
S-LSTM uses a recurrent state transition process to model information exchange between sub states, which enriches state representations incrementally. For the initial state H0, we set h0 i = g0 = h0, where h0 is a parameter. The state transition from Ht−1 to Ht consists of sub state transitions from ht−1 i to ht i and from gt−1 to gt. We take an LSTM structure similar to the baseline BiLSTM for modelling state transition, using a recurrent cell ct i for each wi and a cell ct g for g. As shown in Figure 1, the value of each ht i is computed based on the values of xi, ht−1 i−1, ht−1 i , ht−1 i+1 and gt−1, together with their corresponding cell values: ξt i = [ht−1 i−1, ht−1 i , ht−1 i+1] ˆit i = σ(Wiξt i + Uixi + Vigt−1 + bi) ˆlt i = σ(Wlξt i + Ulxi + Vlgt−1 + bl) ˆrt i = σ(Wrξt i + Urxi + Vrgt−1 + br) ˆ f t i = σ(Wfξt i + Ufxi + Vfgt−1 + bf) ˆst i = σ(Wsξt i + Usxi + Vsgt−1 + bs) ot i = σ(Woξt i + Uoxi + Vogt−1 + bo) ut i = tanh(Wuξt i + Uuxi + Vugt−1 + bu) it i, lt i, rt i, f t i , st i = softmax(ˆit i, ˆlt i, ˆrt i, ˆ f t i , ˆst i) ct i = lt i ⊙ct−1 i−1 + f t i ⊙ct−1 i + rt i ⊙ct−1 i+1 + st i ⊙ct−1 g + it i ⊙ut i ht i = oi t ⊙tanh(ct i) (2) where ξt i is the concatenation of hidden vectors of a context window, and lt i, rt i, f t i , st i and it i are 320 gates that control information flow from ξt i and xi to ct i. In particular, it i controls information from the input xi; lt i, rt i, f t i and st i control information from the left context cell ct−1 i−1, the right context cell ct−1 i+1, ct−1 i and the sentence context cell ct−1 g , respectively. The values of it i, lt i, rt i, f t i and st i are normalised such that they sum to 1. ot i is an output gate from the cell state ct i to the hidden state ht i. Wx, Ux, Vx and bx (x ∈{i, o, l, r, f, s, u}) are model parameters. σ is the sigmoid function. The value of gt is computed based on the values of ht−1 i for all i ∈[0..n + 1]: ¯h = avg(ht−1 0 , ht−1 1 , . . . , ht−1 n+1) ˆ f t g = σ(Wggt−1 + Ug¯h + bg) ˆ f t i = σ(Wfgt−1 + Ufht−1 i + bf) ot = σ(Wogt−1 + Uo¯h + bo) f t 0, . . . , f t n+1, f t g = softmax( ˆ f t 0, . . . , ˆ f t n+1, ˆ f t g) ct g = f t g ⊙ct−1 g + X i f t i ⊙ct−1 i gt = ot ⊙tanh(ct g) (3) where f t 0, . . . , f t n+1 and f t g are gates controlling information from ct−1 0 , . . . , ct−1 n+1 and ct−1 g , respectively, which are normalised. ot is an output gate from the recurrent cell ct g to gt. Wx, Ux and bx (x ∈{g, f, o}) are model parameters. Contrast with BiLSTM The difference between S-LSTM and BiLSTM can be understood with respect to their recurrent states. While BiLSTM uses only one state in each direction to represent the subsequence from the beginning to a certain word, S-LSTM uses a structural state to represent the full sentence, which consists of a sentence-level sub state and n + 2 word-level sub states, simultaneously. Different from BiLSTMs, for which ht at different time steps are used to represent w0, . . . , wn+1, respectively, the word-level states ht i and sentence-level state gt of S-LSTMs directly correspond to the goal outputs hi and g, as introduced in the beginning of this section. As t increases from 0, ht i and gt are enriched with increasingly deeper context information. From the perspective of information flow, BiLSTM passes information from one end of the sentence to the other. As a result, the number of time steps scales with the size of the input. 
In contrast, S-LSTM allows bi-directional information flow at each word simultaneously, and additionally between the sentence-level state and every wordlevel state. At each step, each hi captures an increasing larger ngram context, while additionally communicating globally to all other hj via g. The optimal number of recurrent steps is decided by the end-task performance, and does not necessarily scale with the sentence size. As a result, SLSTM can potentially be both more efficient and more accurate compared with BiLSTMs. Increasing window size. By default S-LSTM exchanges information only between neighbouring words, which can be seen as adopting a 1word window on each side. The window size can be extended to 2, 3 or more words in order to allow more communication in a state transition, expediting information exchange. To this end, we modify Eq 2, integrating additional context words to ξt i, with extended gates and cells. For example, with a window size of 2, ξt i = [ht−1 i−2, ht−1 i−1, ht−1 i , ht−1 i+1, ht−1 i+2]. We study the effectiveness of window size in our experiments. Additional sentence-level nodes. By default S-LSTM uses one sentence-level node. One way of enriching the parameter space is to add more sentence-level nodes, each communicating with word-level nodes in the same way as described by Eq 3. In addition, different sentence-level nodes can communicate with each other during state transition. When one sentence-level node is used for classification outputs, the other sentencelevel node can serve as hidden memory units, or latent features. We study the effectiveness of multiple sentence-level nodes empirically. 3.3 Task settings We consider two task settings, namely classification and sequence labelling. For classification, g is fed to a softmax classification layer: y = softmax(Wcg + bc) where y is the probability distribution of output class labels and Wc and bc are model parameters. For sequence labelling, each hi can be used as feature representation for a corresponding word wi. External attention It has been shown that summation of hidden states using attention (Bahdanau et al., 2015; Yang et al., 2016) give better accuracies compared to using the end states of BiLSTMs. We study the influence of attention on both S-LSTM and BiLSTM for classification. In particular, additive attention (Bahdanau 321 Dataset Training Development Test #sent #words #sent #words #sent #words Movie review (Pang and Lee, 2008) 8527 201137 1066 25026 1066 25260 Books 1400 297K 200 59K 400 68K Electronics 1398 924K 200 184K 400 224K DVD 1400 1,587K 200 317K 400 404K Kitchen 1400 769K 200 153K 400 195K Apparel 1400 525K 200 105K 400 128K Camera 1397 1,084K 200 216K 400 260K Text Health 1400 742K 200 148K 400 175K Classification Music 1400 1,176K 200 235K 400 276K (Liu et al., 2017) Toys 1400 792K 200 158K 400 196K Video 1400 1,311K 200 262K 400 342K Baby 1300 855K 200 171K 400 221K Magazines 1370 1,033K 200 206K 400 264K Software 1315 1,143K 200 228K 400 271K Sports 1400 833K 200 183K 400 218K IMDB 1400 2,205K 200 507K 400 475K MR 1400 196K 200 41K 400 48K POS tagging (Marcus et al., 1993) 39831 950011 1699 40068 2415 56671 NER (Sang et al., 2003) 14987 204567 3466 51578 3684 46666 Table 1: Dataset statistics et al., 2015) is applied to the hidden states of input words for both BiLSTMs and S-LSTMs calculating a weighted sum g = X t αtht where αt = exp uT ϵt P i exp uT ϵi ϵt = tanh(Wαht + bα) Here Wα, u and bα are model parameters. 
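To make the state transition of Section 3.2 concrete before moving to the CRF layer, the sketch below condenses Eqs. 2 and 3 into a single simplified recurrent step. It is an illustrative sketch, not the released implementation: each gate uses one fused affine map over its concatenated inputs instead of the separate W, U and V parameters, the ⟨s⟩ and ⟨/s⟩ states are replaced by zero padding, the window size is fixed to 1, and the parameter and gate names are ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z, axis=0):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def affine(params, name, z):
    # One fused matrix per gate; a simplification of the W/U/V factorisation.
    W, b = params[name]
    return W @ z + b

def s_lstm_step(x, h, c, g, c_g, params):
    """One simplified S-LSTM recurrent step (window size 1).
    x: (n, d_in) word inputs; h, c: (n, d) word-level states and cells;
    g, c_g: (d,) sentence-level state and cell."""
    n, d = h.shape
    h_pad = np.vstack([np.zeros(d), h, np.zeros(d)])   # zero padding at boundaries
    c_pad = np.vstack([np.zeros(d), c, np.zeros(d)])
    new_h, new_c = np.zeros_like(h), np.zeros_like(c)

    # Word-level sub-state transitions (Eq. 2).
    for i in range(n):
        xi = np.concatenate([h_pad[i], h_pad[i + 1], h_pad[i + 2]])  # context window
        z = np.concatenate([xi, x[i], g])
        raw = {k: sigmoid(affine(params, k, z)) for k in "ilrfso"}
        u = np.tanh(affine(params, "u", z))
        # Normalise the five source gates so they sum to one per dimension.
        gi, gl, gr, gf, gs = softmax(np.stack([raw[k] for k in "ilrfs"]), axis=0)
        new_c[i] = (gl * c_pad[i] + gf * c_pad[i + 1] + gr * c_pad[i + 2]
                    + gs * c_g + gi * u)
        new_h[i] = raw["o"] * np.tanh(new_c[i])

    # Sentence-level sub-state transition (Eq. 3).
    h_bar = h.mean(axis=0)
    f_words = np.stack([sigmoid(affine(params, "gf", np.concatenate([g, h[i]])))
                        for i in range(n)])
    f_sent = sigmoid(affine(params, "gg", np.concatenate([g, h_bar])))
    o_sent = sigmoid(affine(params, "go", np.concatenate([g, h_bar])))
    w = softmax(np.vstack([f_words, f_sent[None, :]]), axis=0)
    new_c_g = (w[:-1] * c).sum(axis=0) + w[-1] * c_g
    new_g = o_sent * np.tanh(new_c_g)
    return new_h, new_c, new_g, new_c_g
```

Repeating this step a fixed, sentence-length-independent number of times yields the final word-level vectors h_i and the sentence vector g that are fed to the attention, softmax or CRF layers.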
External CRF For sequential labelling, we use a CRF layer on top of the hidden vectors h1, h2, . . . , hn for calculating the conditional probabilities of label sequences (Huang et al., 2015; Ma and Hovy, 2016): P(Y n 1 |h, Ws, bs) = Qn i=1 ψi(yi−1, yi, h) P Y n′ 1 Qn i=1 ψi(y′ i−1, y′ i, h) ψi(yi−1, yi, h) = exp(W yi−1,yi s hi + byi−1,yi s ) where W yi−1,yi s and byi−1,yi s are parameters specific to two consecutive labels yi−1 and yi. For training, standard log-likelihood loss is used with L2 regularization given a set of gold-standard instances. 4 Experiments We empirically compare S-LSTMs and BiLSTMs on different classification and sequence labelling tasks. All experiments are conducted using a GeForce GTX 1080 GPU with 8GB memory. Model Time (s) Acc # Param +0 dummy node 56 81.76 7,216K +1 dummy node 65 82.64 8,768K +2 dummy node 76 82.24 10,321K Hidden size 100 42 81.75 4,891K Hidden size 200 54 82.04 6,002K Hidden size 300 65 82.64 8,768K Hidden size 600 175 81.84 17,648K Hidden size 900 235 81.66 33,942K Without ⟨s⟩, ⟨/s⟩ 63 82.36 8,768K With ⟨s⟩, ⟨/s⟩ 65 82.64 8,768K Table 2: Movie review DEV results of S-LSTM 4.1 Experimental Settings Datasets. We choose the movie review dataset of Pang and Lee (2008), and additionally the 16 datasets of Liu et al. (2017) for classification evaluation. We randomly split the movie review dataset into training (80%), development (10%) and test (10%) sections, and the original split of Liu et al. (2017) for the 16 classification datasets. For sequence labelling, we choose the Penn Treebank (Marcus et al., 1993) POS tagging task and the CoNLL (Sang et al., 2003) NER task as our benchmarks. For POS tagging, we follow the standard split (Manning, 2011), using sections 0 – 18 for training, 19 – 21 for development and 22 – 24 for test. For NER, we follow the standard split, and use the BIOES tagging scheme (Ratinov and Roth, 2009). Statistics of the four datasets are shown in Table 1. Hyperparameters. We initialise word embeddings using GloVe (Pennington et al., 2014) 300 dimensional embeddings.1 Embeddings are finetuned during model training for all tasks. Dropout (Srivastava et al., 2014) is applied to embedding hidden states, with a rate of 0.5. All models are optimised using the Adam optimizer (Kingma and Ba, 2014), with an initial learning rate of 0.001 and a decay rate of 0.97. Gradients are clipped at 3 and a batch size of 10 is adopted. Sentences with similar lengths are batched together. The L2 regularization parameter is set to 0.001. 4.2 Development Experiments We use the movie review development data to investigate different configurations of S-LSTMs and BiLSTMs. For S-LSTMs, the default configuration uses ⟨s⟩and ⟨/s⟩words for augmenting words 1https://nlp.stanford.edu/projects/glove/ 322 1 3 5 7 9 11 Time Step t 0.795 0.800 0.805 0.810 0.815 0.820 0.825 0.830 Accuracy window = 1 window = 2 window = 3 window = 4 Figure 2: Accuracies with various window sizes and time steps on movie review development set of a sentence. A hidden layer size of 300 and one sentence-level node are used. Hyperparameters: Table 2 shows the development results of various S-LSTM settings, where Time refers to training time per epoch. Without the sentence-level node, the accuracy of S-LSTM drops to 81.76%, demonstrating the necessity of global information exchange. Adding one additional sentence-level node as described in Section 3.2 does not lead to accuracy improvements, although the number of parameters and decoding time increase accordingly. 
As a result, we use only 1 sentence-level node for the remaining experiments. The accuracies of S-LSTM increases as the hidden layer size for each node increases from 100 to 300, but does not further increase when the size increases beyond 300. We fix the hidden size to 300 accordingly. Without using ⟨s⟩and ⟨/s⟩, the performance of S-LSTM drops from 82.64% to 82.36%, showing the effectiveness of having these additional nodes. Hyperparameters for BiLSTM models are also set according to the development data, which we omit here. State transition. In Table 2, the number of recurrent state transition steps of S-LSTM is decided according to the best development performance. Figure 2 draws the development accuracies of SLSTMs with various window sizes against the number of recurrent steps. As can be seen from the figure, when the number of time steps increases from 1 to 11, the accuracies generally increase, before reaching a maximum value. This shows the effectiveness of recurrent information exchange in S-LSTM state transition. On the other hand, no significant differences are observed on the peak accuracies given by different window sizes, although a larger window size (e.g. Model Time (s) Acc # Param LSTM 67 80.72 5,977K BiLSTM 106 81.73 7,059K 2 stacked BiLSTM 207 81.97 9,221K 3 stacked BiLSTM 310 81.53 11,383K 4 stacked BiLSTM 411 81.37 13,546K S-LSTM 65 82.64* 8,768K CNN 34 80.35 5,637K 2 stacked CNN 40 80.97 5,717K 3 stacked CNN 47 81.46 5,808K 4 stacked CNN 51 81.39 5,855K Transformer (N=6) 138 81.03 7,234K Transformer (N=8) 174 81.86 7,615K Transformer (N=10) 214 81.63 8,004K BiLSTM+Attention 126 82.37 7,419K S-LSTM+Attention 87 83.07* 8,858K Table 3: Movie review development results 4) generally results in faster plateauing. This can be be explained by the intuition that information exchange between distant nodes can be achieved using more recurrent steps under a smaller window size, as can be achieved using fewer steps under a larger window size. Considering efficiency, we choose a window size of 1 for the remaining experiments, setting the number of recurrent steps to 9 according to Figure 2. S-LSTM vs BiLSTM: As shown in Table 3, BiLSTM gives significantly better accuracies compared to uni-directional LSTM2, with the training time per epoch growing from 67 seconds to 106 seconds. Stacking 2 layers of BiLSTM gives further improvements to development results, with a larger time of 207 seconds. 3 layers of stacked BiLSTM does not further improve the results. In contrast, S-LSTM gives a development result of 82.64%, which is significantly better compared to 2-layer stacked BiLSTM, with a smaller number of model parameters and a shorter time of 65 seconds. We additionally make comparisons with stacked CNNs and hierarchical attention (Vaswani et al., 2017), shown in Table 3 (the CNN and Transformer rows), where N indicates the number of attention layers. CNN is the most efficient among all models compared, with the smallest model size. On the other hand, a 3-layer stacked CNN gives an accuracy of 81.46%, which is also 2p < 0.01 using t-test. For the remaining of this paper, we use the same measure for statistical significance. 323 Model Accuracy Train (s) Test (s) Socher et al. (2011) 77.70 – – Socher et al. (2012) 79.00 – – Kim (2014) 81.50 – – Qian et al. 
(2016) 81.50 – – BiLSTM 81.61 51 1.62 2 stacked BiLSTM 81.94 98 3.18 3 stacked BiLSTM 81.71 137 4.67 3 stacked CNN 81.59 31 1.04 Transformer (N=8) 81.97 89 2.75 S-LSTM 82.45* 41 1.53 Table 4: Test set results on movie review dataset (* denotes significance in all tables). the lowest compared with BiLSTM, hierarchical attention and S-LSTM. The best performance of hierarchical attention is between single-layer and two-layer BiLSTMs in terms of both accuracy and efficiency. S-LSTM gives significantly better accuracies compared with both CNN and hierarchical attention. Influence of external attention mechanism. Table 3 additionally shows the results of BiLSTM and S-LSTM when external attention is used as described in Section 3.3. Attention leads to improved accuracies for both BiLSTM and S-LSTM in classification, with S-LSTM still outperforming BiLSTM significantly. The result suggests that external techniques such as attention can play orthogonal roles compared with internal recurrent structures, therefore benefiting both BiLSTMs and S-LSTMs. Similar observations are found using external CRF layers for sequence labelling. 4.3 Final Results for Classification The final results on the movie review and rich text classification datasets are shown in Tables 4 and 5, respectively. In addition to training time per epoch, test times are additionally reported. We use the best settings on the movie review development dataset for both S-LSTMs and BiLSTMs. The step number for S-LSTMs is set to 9. As shown in Table 4, the final results on the movie review dataset are consistent with the development results, where S-LSTM outperforms BiLSTM significantly, with a faster speed. Observations on CNN and hierarchical attention are consistent with the development results. S-LSTM also gives highly competitive results when compared with existing methods in the literature. 1 3 5 7 9 11 S-LSTM Time Step 91.5 92.0 92.5 93.0 93.5 94.0 94.5 95.0 F1 (a) CoNLL03 1 3 5 7 9 11 S-LSTM Time Step 96.8 96.9 97.0 97.1 97.2 97.3 97.4 97.5 97.6 Accuracy (b) WSJ Figure 3: Sequence labelling development results. As shown in Table 5, among the 16 datasets of Liu et al. (2017), S-LSTM gives the best results on 12, compared with BiLSTM and 2 layered BiLSTM models. The average accuracy of S-LSTM is 85.6%, significantly higher compared with 84.9% by 2-layer stacked BiLSTM. 3-layer stacked BiLSTM gives an average accuracy of 84.57%, which is lower compared to a 2-layer stacked BiLSTM, with a training time per epoch of 423.6 seconds. The relative speed advantage of S-LSTM over BiLSTM is larger on the 16 datasets as compared to the movie review test test. This is because the average length of inputs is larger on the 16 datasets (see Section 4.5). 4.4 Final Results for Sequence Labelling Bi-directional RNN-CRF structures, and in particular BiLSTM-CRFs, have achieved the state of the art in the literature for sequence labelling tasks, including POS-tagging and NER. We compare SLSTM-CRF with BiLSTM-CRF for sequence labelling, using the same settings as decided on the movie review development experiments for both BiLSTMs and S-LSTMs. 
For the latter, we decide 324 Dataset SLSTM Time (s) BiLSTM Time (s) 2 BiLSTM Time (s) Camera 90.02* 50 (2.85) 87.05 115 (8.37) 88.07 221 (16.1) Video 86.75* 55 (3.95) 84.73 140 (12.59) 85.23 268 (25.86) Health 86.5 37 (2.17) 85.52 118 (6.38) 85.89 227 (11.16) Music 82.04* 52 (3.44) 78.74 185 (12.27) 80.45 268 (23.46) Kitchen 84.54* 40 (2.50) 82.22 118 (10.18) 83.77 225 (19.77) DVD 85.52* 63 (5.29) 83.71 166 (15.42) 84.77 217 (28.31) Toys 85.25 39 (2.42) 85.72 119 (7.58) 85.82 231 (14.83) Baby 86.25* 40 (2.63) 84.51 125 (8.50) 85.45 238 (17.73) Books 83.44* 64 (3.64) 82.12 240 (13.59) 82.77 458 (28.82) IMDB 87.15* 67 (3.69) 86.02 248 (13.33) 86.55 486 (26.22) MR 76.2 27 (1.25) 75.73 39 (2.27) 75.98 72 (4.63) Appeal 85.75 35 (2.83) 86.05 119 (11.98) 86.35* 229 (22.76) Magazines 93.75* 51 (2.93) 92.52 214 (11.06) 92.89 417 (22.77) Electronics 83.25* 47 (2.55) 82.51 195 (10.14) 82.33 356 (19.77) Sports 85.75* 44 (2.64) 84.04 172 (8.64) 84.78 328 (16.34) Software 87.75* 54 (2.98) 86.73 245 (12.38) 86.97 459 (24.68) Average 85.38* 47.30 (2.98) 84.01 153.48 (10.29) 84.64 282.24 (20.2) Table 5: Results on the 16 datasets of Liu et al. (2017). Time format: train (test) Model Accuracy Train (s) Test (s) Manning (2011) 97.28 – – Collobert et al. (2011) 97.29 – – Sun (2014) 97.36 – – Søgaard (2011) 97.50 – – Huang et al. (2015) 97.55 – – Ma and Hovy (2016) 97.55 – – Yang et al. (2017) 97.55 – – BiLSTM 97.35 254 22.50 2 stacked BiLSTM 97.41 501 43.99 3 stacked BiLSTM 97.40 746 64.96 S-LSTM 97.55 237 22.16 Table 6: Results on PTB (POS tagging) the number of recurrent steps on the respective development sets for sequence labelling. The POS accuracies and NER F1-scores against the number of recurrent steps are shown in Figure 3 (a) and (b), respectively. For POS tagging, the best step number is set to 7, with a development accuracy of 97.58%. For NER, the step number is set to 9, with a development F1-score of 94.98%. As can be seen in Table 6, S-LSTM gives significantly better results compared with BiLSTM on the WSJ dataset. It also gives competitive accuracies as compared with existing methods in the literature. Stacking two layers of BiLSTMs leads to improved results compared to one-layer BiLSTM, but the accuracy does not further improve Model F1 Train (s) Test (s) Collobert et al. (2011) 89.59 – – Passos et al. (2014) 90.90 – – Luo et al. (2015) 91.20 – – Huang et al. (2015) 90.10 – – Lample et al. (2016) 90.94 – – Ma and Hovy (2016) 91.21 – – Yang et al. (2017) 91.26 – – Rei (2017) 86.26 – – Peters et al. (2017) 91.93 – – BiLSTM 90.96 82 9.89 2 stacked BiLSTM 91.02 159 18.88 3 stacked BiLSTM 91.06 235 30.97 S-LSTM 91.57* 79 9.78 Table 7: Results on CoNLL03 (NER) with three layers of stacked LSTMs. For NER (Table 7), S-LSTM gives an F1-score of 91.57% on the CoNLL test set, which is significantly better compared with BiLSTMs. Stacking more layers of BiLSTMs leads to slightly better F1-scores compared with a single-layer BiLSTM. Our BiLSTM results are comparable to the results reported by Ma and Hovy (2016) and Lample et al. (2016), who also use bidirectional RNNCRF structures. In contrast, S-LSTM gives the best reported results under the same settings. In the second section of Table 7, Yang et al. 
(2017) use cross-domain data, obtaining an Fscore of 91.26%; Rei (2017) perform multi-task 325 10 20 30 40 50 60 Length 0.70 0.75 0.80 0.85 0.90 Accuracy BiLSTM S-LSTM (a) Movie review 20 40 60 80 100 120 Length 0.92 0.93 0.94 0.95 0.96 0.97 0.98 0.99 1.00 F1 BiLSTM S-LSTM (b) CoNLL03 Figure 4: Accuracies against sentence length. learning using additional language model objectives, obtaining an F-score of 86.26%; Peters et al. (2017) leverage character-level language models, obtaining an F-score of 91.93%, which is the current best result on the dataset. All the three models are based on BiLSTM-CRF. On the other hand, these semi-supervised learning techniques are orthogonal to our work, and can potentially be used for S-LSTM also. 4.5 Analysis Figure 4 (a) and (b) show the accuracies against the sentence length on the movie review and CoNLL datasets, respectively, where test samples are binned in batches of 80. We find that the performances of both S-LSTM and BiLSTM decrease as the sentence length increases. On the other hand, S-LSTM demonstrates relatively better robustness compared to BiLSTMs. This confirms our intuition that a sentence-level node can facilitate better non-local communication. Figure 5 shows the training time per epoch of S-LSTM and BiLSTM on sentences with different lengths on the 16 classification datasets. To make 16.7 29.9 43.8 59.4 76.7 97.6 124.4161.6226.8484.3 Avg Length 0 100 200 300 400 500 600 Time (s) BiLSTM S-LSTM Figure 5: Time against sentence length. these comparisons, we mix all training instances, order them by the size, and put them into 10 equal groups, the medium sentence lengths of which are shown. As can be seen from the figure, the speed advantage of S-LSTM is larger when the size of the input text increases, thanks to a fixed number of recurrent steps. Similar to hierarchical attention (Vaswani et al., 2017), there is a relative disadvantage of S-LSTM in comparison with BiLSTM, which is that the memory consumption is relatively larger. For example, over the movie review development set, the actual GPU memory consumption by S-LSTM, BiLSTM, 2-layer stacked BiLSTM and 4-layer stacked BiLSTM are 252M, 89M, 146M and 253M, respectively. This is due to the fact that computation is performed in parallel by S-LSTM and hierarchical attention. 5 Conclusion We have investigated S-LSTM, a recurrent neural network for encoding sentences, which offers richer contextual information exchange with more parallelism compared to BiLSTMs. Results on a range of classification and sequence labelling tasks show that S-LSTM outperforms BiLSTMs using the same number of parameters, demonstrating that S-LSTM can be a useful addition to the neural toolbox for encoding sentences. The structural nature in S-LSTM states allows straightforward extension to tree structures, resulting in highly parallelisable tree LSTMs. We leave such investigation to future work. Next directions also include the investigation of S-LSTM to more NLP tasks, such as machine translation. Acknowledge We thank the anonymous reviewers for their constructive and thoughtful comments. 326 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR 2015. Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of EMNLP 2017. Copenhagen, Denmark, pages 1957–1967. 
Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR 12(Aug):2493–2537. Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In ICML. pages 933–941. Timothy Dozat and Christopher D Manning. 2017. Deep biaffine attention for neural dependency parsing. In ICLR 2017. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of ACL 2015. Beijing, China, pages 334–343. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks pages 602–610. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In NIPS. pages 1693–1701. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991 . Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of ACL 2014. Baltimore, Maryland, pages 655–665. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP 2014. Doha, Qatar, pages 1746–1751. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation. Vancouver, pages 28–39. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In NIPS. pages 1097– 1105. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 NAACL. San Diego, California, pages 260–270. Yann LeCun, Yoshua Bengio, et al. 1995. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks 3361(10):1995. Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 2016. Gated graph sequence neural networks. In ICLR 2016. Xiaodan Liang, Xiaohui Shen, Jiashi Feng, Liang Lin, and Shuicheng Yan. 2016. Semantic object parsing with graph lstm. In ECCV. Springer, pages 125– 143. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classification. In Proceedings of ACL 2017. Vancouver, Canada, pages 1–10. Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In Proceedings of EMNLP 2015. pages 879–888. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In Proceedings of ACL 2016. Berlin, Germany, pages 1064–1074. Christopher D Manning. 2011. Part-of-speech tagging from 97% to 100%: is it time for some linguistics? In CICLing. Springer, pages 171–189. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of EMNLP 2017. Copenhagen, Denmark, pages 1506–1515. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 
1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics 19(2):313–330. Kevin P Murphy, Yair Weiss, and Michael I Jordan. 1999. Loopy belief propagation for approximate inference: An empirical study. In UAI. Morgan Kaufmann Publishers Inc., pages 467–475. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends R⃝in Information Retrieval 2(1–2):1–135. George Papandreou, Liang-Chieh Chen, Kevin Murphy, and Alan L Yuille. 2015. Weakly-and semisupervised learning of a dcnn for semantic image segmentation. arXiv preprint arXiv:1502.02734 . 327 Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. In CoNLL. Ann Arbor, Michigan, pages 78–86. Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph lstms. Transactions of the Association for Computational Linguistics 5:101–115. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP 2014. pages 1532–1543. Matthew Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language models. In Proceedings of ACL 2017. Vancouver, Canada, pages 1756–1765. Qiao Qian, Minlie Huang, Jinhao Lei, and Xiaoyan Zhu. 2016. Linguistically regularized lstms for sentiment classification. arXiv preprint arXiv:1611.03949 . Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In CoNLL. pages 147–155. Marek Rei. 2017. Semi-supervised multitask learning for sequence labeling. In Proceedings of ACL 2017. Vancouver, Canada, pages 2121–2130. Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. 2017. Dynamic routing between capsules. In NIPS. pages 3859–3869. Tjong Kim Sang, Erik F, and De Meulder Fien. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of HLT-NAACL 2003-Volume 4. pages 142–147. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2009. The graph neural network model. IEEE Transactions on Neural Networks 20(1):61–80. Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of EMNLP 2012. pages 1201–1211. Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of EMNLP 2011. pages 151–161. Anders Søgaard. 2011. Semisupervised condensed nearest neighbor for part-of-speech tagging. In Proceedings of ACL 2011. pages 48–52. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR 15(1):1929–1958. Xu Sun. 2014. Structure regularization for structured prediction. In NIPS. pages 2402–2410. Martin Sundermeyer, Ralf Schl¨uter, and Hermann Ney. 2012. Lstm neural networks for language modeling. In InterSpeech. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of ACL 2015. Beijing, China, pages 1556–1566. Ming Tan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. 
Lstm-based deep learning models for non-factoid answer selection. arXiv preprint arXiv:1511.04108 . Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. pages 6000–6010. Xingyou Wang, Weijie Jiang, and Zhiyong Luo. 2016. Combination of convolutional and recurrent neural network for sentiment analysis of short texts. In Proceedings of COLING 2016. pages 2428–2437. Tsung-Hsien Wen, Milica Gasic, Nikola Mrkˇsi´c, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of EMNLP 2015. Lisbon, Portugal, pages 1711–1721. Zhilin Yang, Ruslan Salakhutdinov, and William W Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. In ICLR 2017. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of NAACL 2016. pages 1480–1489.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 328–339, Melbourne, Australia, July 15 - 20, 2018. ©2018 Association for Computational Linguistics

Universal Language Model Fine-tuning for Text Classification

Jeremy Howard* (fast.ai, University of San Francisco, [email protected]) and Sebastian Ruder* (Insight Centre, NUI Galway; Aylien Ltd., Dublin, [email protected])

Abstract

Inductive transfer learning has greatly impacted computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a language model. Our method significantly outperforms the state-of-the-art on six text classification tasks, reducing the error by 18-24% on the majority of datasets. Furthermore, with only 100 labeled examples, it matches the performance of training from scratch on 100× more data. We open-source our pretrained models and code.1

1 Introduction

Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS-COCO, and other datasets (Sharif Razavian et al., 2014; Long et al., 2015a; He et al., 2016; Huang et al., 2017). Text classification is a category of Natural Language Processing (NLP) tasks with real-world applications such as spam, fraud, and bot detection (Jindal and Liu, 2007; Ngai et al., 2011; Chu et al., 2012), emergency response (Caragea et al., 2011), and commercial document classification, such as for legal discovery (Roitblat et al., 2010).

1 http://nlp.fast.ai/ulmfit. * Equal contribution. Jeremy focused on the algorithm development and implementation, Sebastian focused on the experiments and writing.

While Deep Learning models have achieved state-of-the-art performance on many NLP tasks, these models are trained from scratch, requiring large datasets and days to converge. Research in NLP has focused mostly on transductive transfer (Blitzer et al., 2007). For inductive transfer, fine-tuning pretrained word embeddings (Mikolov et al., 2013), a simple transfer technique that only targets a model's first layer, has had a large impact in practice and is used in most state-of-the-art models. Recent approaches that concatenate embeddings derived from other tasks with the input at different layers (Peters et al., 2017; McCann et al., 2017; Peters et al., 2018) still train the main task model from scratch and treat pretrained embeddings as fixed parameters, limiting their usefulness. In light of the benefits of pretraining (Erhan et al., 2010), we should be able to do better than randomly initializing the remaining parameters of our models. However, inductive transfer via fine-tuning has been unsuccessful for NLP (Mou et al., 2016). Dai and Le (2015) first proposed fine-tuning a language model (LM) but require millions of in-domain documents to achieve good performance, which severely limits its applicability. We show that it is not the idea of LM fine-tuning but our lack of knowledge of how to train LMs effectively that has been hindering wider adoption. LMs overfit to small datasets and suffer catastrophic forgetting when fine-tuned with a classifier.
Compared to CV, NLP models are typically more shallow and thus require different fine-tuning methods. We propose a new method, Universal Language Model Fine-tuning (ULMFiT) that addresses these issues and enables robust inductive transfer learning for any NLP task, akin to fine-tuning ImageNet models: The same 3-layer LSTM architecture— with the same hyperparameters and no additions other than tuned dropout hyperparameters— outperforms highly engineered models and trans329 fer learning approaches on six widely studied text classification tasks. On IMDb, with 100 labeled examples, ULMFiT matches the performance of training from scratch with 10× and—given 50k unlabeled examples—with 100× more data. Contributions Our contributions are the following: 1) We propose Universal Language Model Fine-tuning (ULMFiT), a method that can be used to achieve CV-like transfer learning for any task for NLP. 2) We propose discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing, novel techniques to retain previous knowledge and avoid catastrophic forgetting during fine-tuning. 3) We significantly outperform the state-of-the-art on six representative text classification datasets, with an error reduction of 18-24% on the majority of datasets. 4) We show that our method enables extremely sample-efficient transfer learning and perform an extensive ablation analysis. 5) We make the pretrained models and our code available to enable wider adoption. 2 Related work Transfer learning in CV Features in deep neural networks in CV have been observed to transition from task-specific to general from the first to the last layer (Yosinski et al., 2014). For this reason, most work in CV focuses on transferring the last layers of the model (Long et al., 2015b). Sharif Razavian et al. (2014) achieve state-of-theart results using features of an ImageNet model as input to a simple classifier. In recent years, this approach has been superseded by fine-tuning either the last (Donahue et al., 2014) or several of the last layers of a pretrained model and leaving the remaining layers frozen (Long et al., 2015a). Hypercolumns In NLP, only recently have methods been proposed that go beyond transferring word embeddings. The prevailing approach is to pretrain embeddings that capture additional context via other tasks. Embeddings at different levels are then used as features, concatenated either with the word embeddings or with the inputs at intermediate layers. This method is known as hypercolumns (Hariharan et al., 2015) in CV2 and is used by Peters et al. (2017), Peters et al. (2018), Wieting and Gimpel (2017), Conneau 2A hypercolumn at a pixel in CV is the vector of activations of all CNN units above that pixel. In analogy, a hypercolumn for a word or sentence in NLP is the concatenation of embeddings at different layers in a pretrained model. et al. (2017), and McCann et al. (2017) who use language modeling, paraphrasing, entailment, and Machine Translation (MT) respectively for pretraining. Specifically, Peters et al. (2018) require engineered custom architectures, while we show state-of-the-art performance with the same basic architecture across a range of tasks. In CV, hypercolumns have been nearly entirely superseded by end-to-end fine-tuning (Long et al., 2015a). Multi-task learning A related direction is multi-task learning (MTL) (Caruana, 1993). This is the approach taken by Rei (2017) and Liu et al. 
(2018) who add a language modeling objective to the model that is trained jointly with the main task model. MTL requires the tasks to be trained from scratch every time, which makes it inefficient and often requires careful weighting of the taskspecific objective functions (Chen et al., 2017). Fine-tuning Fine-tuning has been used successfully to transfer between similar tasks, e.g. in QA (Min et al., 2017), for distantly supervised sentiment analysis (Severyn and Moschitti, 2015), or MT domains (Sennrich et al., 2015) but has been shown to fail between unrelated ones (Mou et al., 2016). Dai and Le (2015) also fine-tune a language model, but overfit with 10k labeled examples and require millions of in-domain documents for good performance. In contrast, ULMFiT leverages general-domain pretraining and novel finetuning techniques to prevent overfitting even with only 100 labeled examples and achieves state-ofthe-art results also on small datasets. 3 Universal Language Model Fine-tuning We are interested in the most general inductive transfer learning setting for NLP (Pan and Yang, 2010): Given a static source task TS and any target task TT with TS ̸= TT , we would like to improve performance on TT . Language modeling can be seen as the ideal source task and a counterpart of ImageNet for NLP: It captures many facets of language relevant for downstream tasks, such as long-term dependencies (Linzen et al., 2016), hierarchical relations (Gulordava et al., 2018), and sentiment (Radford et al., 2017). In contrast to tasks like MT (McCann et al., 2017) and entailment (Conneau et al., 2017), it provides data in near-unlimited quantities for most domains and languages. Additionally, a pretrained LM can be easily adapted to the idiosyncrasies of a target 330 1/1 dollar The gold or Embedding layer Layer 1 Layer 2 Layer 3 Softmax layer gold (a) LM pre-training 1/1 scene The best ever Embedding layer Layer 1 Layer 2 Layer 3 Softmax layer (b) LM fine-tuning 1/1 scene The best ever Embedding layer Layer 1 Layer 2 Layer 3 Softmax layer (c) Classifier fine-tuning Figure 1: ULMFiT consists of three stages: a) The LM is trained on a general-domain corpus to capture general features of the language in different layers. b) The full LM is fine-tuned on target task data using discriminative fine-tuning (‘Discr’) and slanted triangular learning rates (STLR) to learn task-specific features. c) The classifier is fine-tuned on the target task using gradual unfreezing, ‘Discr’, and STLR to preserve low-level representations and adapt high-level ones (shaded: unfreezing stages; black: frozen). task, which we show significantly improves performance (see Section 5). Moreover, language modeling already is a key component of existing tasks such as MT and dialogue modeling. Formally, language modeling induces a hypothesis space H that should be useful for many other NLP tasks (Vapnik and Kotz, 1982; Baxter, 2000). We propose Universal Language Model Finetuning (ULMFiT), which pretrains a language model (LM) on a large general-domain corpus and fine-tunes it on the target task using novel techniques. The method is universal in the sense that it meets these practical criteria: 1) It works across tasks varying in document size, number, and label type; 2) it uses a single architecture and training process; 3) it requires no custom feature engineering or preprocessing; and 4) it does not require additional in-domain documents or labels. 
In our experiments, we use the state-of-theart language model AWD-LSTM (Merity et al., 2017a), a regular LSTM (with no attention, short-cut connections, or other sophisticated additions) with various tuned dropout hyperparameters. Analogous to CV, we expect that downstream performance can be improved by using higherperformance language models in the future. ULMFiT consists of the following steps, which we show in Figure 1: a) General-domain LM pretraining (§3.1); b) target task LM fine-tuning (§3.2); and c) target task classifier fine-tuning (§3.3). We discuss these in the following sections. 3.1 General-domain LM pretraining An ImageNet-like corpus for language should be large and capture general properties of language. We pretrain the language model on Wikitext-103 (Merity et al., 2017b) consisting of 28,595 preprocessed Wikipedia articles and 103 million words. Pretraining is most beneficial for tasks with small datasets and enables generalization even with 100 labeled examples. We leave the exploration of more diverse pretraining corpora to future work, but expect that they would boost performance. While this stage is the most expensive, it only needs to be performed once and improves performance and convergence of downstream models. 3.2 Target task LM fine-tuning No matter how diverse the general-domain data used for pretraining is, the data of the target task will likely come from a different distribution. We thus fine-tune the LM on data of the target task. Given a pretrained general-domain LM, this stage converges faster as it only needs to adapt to the idiosyncrasies of the target data, and it allows us to train a robust LM even for small datasets. We propose discriminative fine-tuning and slanted triangular learning rates for fine-tuning the LM, which we introduce in the following. Discriminative fine-tuning As different layers capture different types of information (Yosinski et al., 2014), they should be fine-tuned to different extents. To this end, we propose a novel fine331 tuning method, discriminative fine-tuning3. Instead of using the same learning rate for all layers of the model, discriminative fine-tuning allows us to tune each layer with different learning rates. For context, the regular stochastic gradient descent (SGD) update of a model’s parameters θ at time step t looks like the following (Ruder, 2016): θt = θt−1 −η · ∇θJ(θ) (1) where η is the learning rate and ∇θJ(θ) is the gradient with regard to the model’s objective function. For discriminative fine-tuning, we split the parameters θ into {θ1, . . . , θL} where θl contains the parameters of the model at the l-th layer and L is the number of layers of the model. Similarly, we obtain {η1, . . . , ηL} where ηl is the learning rate of the l-th layer. The SGD update with discriminative finetuning is then the following: θl t = θl t−1 −ηl · ∇θlJ(θ) (2) We empirically found it to work well to first choose the learning rate ηL of the last layer by fine-tuning only the last layer and using ηl−1 = ηl/2.6 as the learning rate for lower layers. Slanted triangular learning rates For adapting its parameters to task-specific features, we would like the model to quickly converge to a suitable region of the parameter space in the beginning of training and then refine its parameters. Using the same learning rate (LR) or an annealed learning rate throughout training is not the best way to achieve this behaviour. 
Instead, we propose slanted triangular learning rates (STLR), which first linearly increases the learning rate and then linearly decays it according to the following update schedule, which can be seen in Figure 2: cut = ⌊T · cut frac⌋ p = ( t/cut, if t < cut 1 − t−cut cut·(ratio−1), otherwise ηt = ηmax · 1 + p · (ratio −1) ratio (3) where T is the number of training iterations4, cut frac is the fraction of iterations we increase 3 An unrelated method of the same name exists for deep Boltzmann machines (Salakhutdinov and Hinton, 2009). 4In other words, the number of epochs times the number of updates per epoch. the LR, cut is the iteration when we switch from increasing to decreasing the LR, p is the fraction of the number of iterations we have increased or will decrease the LR respectively, ratio specifies how much smaller the lowest LR is from the maximum LR ηmax, and ηt is the learning rate at iteration t. We generally use cut frac = 0.1, ratio = 32 and ηmax = 0.01. STLR modifies triangular learning rates (Smith, 2017) with a short increase and a long decay period, which we found key for good performance.5 In Section 5, we compare against aggressive cosine annealing, a similar schedule that has recently been used to achieve state-of-the-art performance in CV (Loshchilov and Hutter, 2017).6 Figure 2: The slanted triangular learning rate schedule used for ULMFiT as a function of the number of training iterations. 3.3 Target task classifier fine-tuning Finally, for fine-tuning the classifier, we augment the pretrained language model with two additional linear blocks. Following standard practice for CV classifiers, each block uses batch normalization (Ioffe and Szegedy, 2015) and dropout, with ReLU activations for the intermediate layer and a softmax activation that outputs a probability distribution over target classes at the last layer. Note that the parameters in these task-specific classifier layers are the only ones that are learned from scratch. The first linear layer takes as the input the pooled last hidden layer states. Concat pooling The signal in text classification tasks is often contained in a few words, which may 5We also credit personal communication with the author. 6While Loshchilov and Hutter (2017) use multiple annealing cycles, we generally found one cycle to work best. 332 occur anywhere in the document. As input documents can consist of hundreds of words, information may get lost if we only consider the last hidden state of the model. For this reason, we concatenate the hidden state at the last time step hT of the document with both the max-pooled and the mean-pooled representation of the hidden states over as many time steps as fit in GPU memory H = {h1, . . . , hT }: hc = [hT , maxpool(H), meanpool(H)] (4) where [] is concatenation. Fine-tuning the target classifier is the most critical part of the transfer learning method. Overly aggressive fine-tuning will cause catastrophic forgetting, eliminating the benefit of the information captured through language modeling; too cautious fine-tuning will lead to slow convergence (and resultant overfitting). Besides discriminative finetuning and triangular learning rates, we propose gradual unfreezing for fine-tuning the classifier. 
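Before turning to gradual unfreezing, the STLR schedule above (Eq. 3) can be made concrete with a short, self-contained sketch. The function below is our own illustration rather than the authors' code; it uses the stated defaults cut_frac = 0.1, ratio = 32 and eta_max = 0.01, and it assumes the decay denominator is cut · (1/cut_frac − 1) rather than the printed cut · (ratio − 1), since only then does the learning rate fall all the way to eta_max/ratio by the final iteration, matching the short-increase/long-decay shape shown in Figure 2.

```python
import math

def slanted_triangular_lr(t, T, cut_frac=0.1, ratio=32, eta_max=0.01):
    """Learning rate at iteration t (0 <= t <= T): linear warm-up during the
    first cut_frac * T updates, then linear decay to eta_max / ratio at T."""
    cut = math.floor(T * cut_frac)
    if t < cut:
        p = t / cut
    else:
        # Denominator chosen so that p reaches 0 (and the rate reaches
        # eta_max / ratio) exactly at the last iteration T.
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))
    return eta_max * (1 + p * (ratio - 1)) / ratio

# Example: 1,000 total updates; the peak eta_max is reached after 100 updates.
lrs = [slanted_triangular_lr(t, T=1000) for t in range(1001)]
assert abs(max(lrs) - 0.01) < 1e-9 and abs(lrs[-1] - 0.01 / 32) < 1e-9
```

Feeding the returned value to the optimizer at every update reproduces the short warm-up and long decay that the text identifies as key for good performance.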
Gradual unfreezing Rather than fine-tuning all layers at once, which risks catastrophic forgetting, we propose to gradually unfreeze the model starting from the last layer as this contains the least general knowledge (Yosinski et al., 2014): We first unfreeze the last layer and fine-tune all unfrozen layers for one epoch. We then unfreeze the next lower frozen layer and repeat, until we finetune all layers until convergence at the last iteration. This is similar to ‘chain-thaw’ (Felbo et al., 2017), except that we add a layer at a time to the set of ‘thawed’ layers, rather than only training a single layer at a time. While discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing all are beneficial on their own, we show in Section 5 that they complement each other and enable our method to perform well across diverse datasets. BPTT for Text Classification (BPT3C) Language models are trained with backpropagation through time (BPTT) to enable gradient propagation for large input sequences. In order to make fine-tuning a classifier for large documents feasible, we propose BPTT for Text Classification (BPT3C): We divide the document into fixedlength batches of size b. At the beginning of each batch, the model is initialized with the final state of the previous batch; we keep track of the hidden states for mean and max-pooling; gradients Dataset Type # classes # examples TREC-6 Question 6 5.5k IMDb Sentiment 2 25k Yelp-bi Sentiment 2 560k Yelp-full Sentiment 5 650k AG Topic 4 120k DBpedia Topic 14 560k Table 1: Text classification datasets and tasks with number of classes and training examples. are back-propagated to the batches whose hidden states contributed to the final prediction. In practice, we use variable length backpropagation sequences (Merity et al., 2017a). Bidirectional language model Similar to existing work (Peters et al., 2017, 2018), we are not limited to fine-tuning a unidirectional language model. For all our experiments, we pretrain both a forward and a backward LM. We fine-tune a classifier for each LM independently using BPT3C and average the classifier predictions. 4 Experiments While our approach is equally applicable to sequence labeling tasks, we focus on text classification tasks in this work due to their important realworld applications. 4.1 Experimental setup Datasets and tasks We evaluate our method on six widely-studied datasets, with varying numbers of documents and varying document length, used by state-of-the-art text classification and transfer learning approaches (Johnson and Zhang, 2017; McCann et al., 2017) as instances of three common text classification tasks: sentiment analysis, question classification, and topic classification. We show the statistics for each dataset and task in Table 1. Sentiment Analysis For sentiment analysis, we evaluate our approach on the binary movie review IMDb dataset (Maas et al., 2011) and on the binary and five-class version of the Yelp review dataset compiled by Zhang et al. (2015). Question Classification We use the six-class version of the small TREC dataset (Voorhees and Tice, 1999) dataset of open-domain, fact-based questions divided into broad semantic categories. 
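Before moving to the results, here is a minimal sketch of the gradual unfreezing schedule described above. It is an illustration under our own assumptions, not the released implementation: layers are assumed to carry a boolean `trainable` flag, and `train_one_epoch` is a hypothetical, caller-supplied training loop over the currently unfrozen layers.

```python
from types import SimpleNamespace

def gradually_unfreeze(layers, n_epochs, train_one_epoch):
    """layers: list ordered bottom -> top, each with a boolean `trainable` flag.
    `train_one_epoch` runs one epoch of fine-tuning over the unfrozen layers."""
    for layer in layers:
        layer.trainable = False                      # start fully frozen
    for epoch in range(n_epochs):
        # Unfreeze one additional layer per epoch, starting from the top
        # (least general) layer; layers thawed earlier stay trainable.
        first_unfrozen = max(len(layers) - 1 - epoch, 0)
        for layer in layers[first_unfrozen:]:
            layer.trainable = True
        train_one_epoch([l for l in layers if l.trainable])

# Toy run: 4 dummy "layers", printing which ones are trained in each epoch.
dummy_layers = [SimpleNamespace(name=f"layer{i}") for i in range(4)]
gradually_unfreeze(dummy_layers, n_epochs=5,
                   train_one_epoch=lambda active: print([l.name for l in active]))
```

After len(layers) − 1 epochs every layer is unfrozen and the remaining epochs fine-tune the full model, in line with the "add one layer at a time to the thawed set" behaviour contrasted with chain-thaw above.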
333 Model Test Model Test IMDb CoVe (McCann et al., 2017) 8.2 TREC-6 CoVe (McCann et al., 2017) 4.2 oh-LSTM (Johnson and Zhang, 2016) 5.9 TBCNN (Mou et al., 2015) 4.0 Virtual (Miyato et al., 2016) 5.9 LSTM-CNN (Zhou et al., 2016) 3.9 ULMFiT (ours) 4.6 ULMFiT (ours) 3.6 Table 2: Test error rates (%) on two text classification datasets used by McCann et al. (2017). AG DBpedia Yelp-bi Yelp-full Char-level CNN (Zhang et al., 2015) 9.51 1.55 4.88 37.95 CNN (Johnson and Zhang, 2016) 6.57 0.84 2.90 32.39 DPCNN (Johnson and Zhang, 2017) 6.87 0.88 2.64 30.58 ULMFiT (ours) 5.01 0.80 2.16 29.98 Table 3: Test error rates (%) on text classification datasets used by Johnson and Zhang (2017). Topic classification For topic classification, we evaluate on the large-scale AG news and DBpedia ontology datasets created by Zhang et al. (2015). Pre-processing We use the same pre-processing as in earlier work (Johnson and Zhang, 2017; McCann et al., 2017). In addition, to allow the language model to capture aspects that might be relevant for classification, we add special tokens for upper-case words, elongation, and repetition. Hyperparameters We are interested in a model that performs robustly across a diverse set of tasks. To this end, if not mentioned otherwise, we use the same set of hyperparameters across tasks, which we tune on the IMDb validation set. We use the AWD-LSTM language model (Merity et al., 2017a) with an embedding size of 400, 3 layers, 1150 hidden activations per layer, and a BPTT batch size of 70. We apply dropout of 0.4 to layers, 0.3 to RNN layers, 0.4 to input embedding layers, 0.05 to embedding layers, and weight dropout of 0.5 to the RNN hidden-to-hidden matrix. The classifier has a hidden layer of size 50. We use Adam with β1 = 0.7 instead of the default β1 = 0.9 and β2 = 0.99, similar to (Dozat and Manning, 2017). We use a batch size of 64, a base learning rate of 0.004 and 0.01 for finetuning the LM and the classifier respectively, and tune the number of epochs on the validation set of each task7. We otherwise use the same practices 7On small datasets such as TREC-6, we fine-tune the LM only for 15 epochs without overfitting, while we can fine-tune longer on larger datasets. We found 50 epochs to be a good default for fine-tuning the classifier. used in (Merity et al., 2017a). Baselines and comparison models For each task, we compare against the current state-of-theart. For the IMDb and TREC-6 datasets, we compare against CoVe (McCann et al., 2017), a stateof-the-art transfer learning method for NLP. For the AG, Yelp, and DBpedia datasets, we compare against the state-of-the-art text categorization method by Johnson and Zhang (2017). 4.2 Results For consistency, we report all results as error rates (lower is better). We show the test error rates on the IMDb and TREC-6 datasets used by McCann et al. (2017) in Table 2. Our method outperforms both CoVe, a state-of-the-art transfer learning method based on hypercolumns, as well as the state-of-the-art on both datasets. On IMDb, we reduce the error dramatically by 43.9% and 22% with regard to CoVe and the state-of-the-art respectively. This is promising as the existing stateof-the-art requires complex architectures (Peters et al., 2018), multiple forms of attention (McCann et al., 2017) and sophisticated embedding schemes (Johnson and Zhang, 2016), while our method employs a regular LSTM with dropout. We note that the language model fine-tuning approach of Dai and Le (2015) only achieves an error of 7.64 vs. 
4.6 for our method on IMDb, demonstrating the benefit of transferring knowledge from a large ImageNet-like corpus using our fine-tuning techniques. IMDb in particular is reflective of realworld datasets: Its documents are generally a few 334 Figure 3: Validation error rates for supervised and semi-supervised ULMFiT vs. training from scratch with different numbers of training examples on IMDb, TREC-6, and AG (from left to right). paragraphs long—similar to emails (e.g for legal discovery) and online comments (e.g for community management); and sentiment analysis is similar to many commercial applications, e.g. product response tracking and support email routing. On TREC-6, our improvement—similar as the improvements of state-of-the-art approaches—is not statistically significant, due to the small size of the 500-examples test set. Nevertheless, the competitive performance on TREC-6 demonstrates that our model performs well across different dataset sizes and can deal with examples that range from single sentences—in the case of TREC-6— to several paragraphs for IMDb. Note that despite pretraining on more than two orders of magnitude less data than the 7 million sentence pairs used by McCann et al. (2017), we consistently outperform their approach on both datasets. We show the test error rates on the larger AG, DBpedia, Yelp-bi, and Yelp-full datasets in Table 3. Our method again outperforms the state-ofthe-art significantly. On AG, we observe a similarly dramatic error reduction by 23.7% compared to the state-of-the-art. On DBpedia, Yelp-bi, and Yelp-full, we reduce the error by 4.8%, 18.2%, 2.0% respectively. 5 Analysis In order to assess the impact of each contribution, we perform a series of analyses and ablations. We run experiments on three corpora, IMDb, TREC6, and AG that are representative of different tasks, genres, and sizes. For all experiments, we split off 10% of the training set and report error rates on this validation set with unidirectional LMs. We fine-tune the classifier for 50 epochs and train all methods but ULMFiT with early stopping. Low-shot learning One of the main benefits of transfer learning is being able to train a model for Pretraining IMDb TREC-6 AG Without pretraining 5.63 10.67 5.52 With pretraining 5.00 5.69 5.38 Table 4: Validation error rates for ULMFiT with and without pretraining. a task with a small number of labels. We evaluate ULMFiT on different numbers of labeled examples in two settings: only labeled examples are used for LM fine-tuning (‘supervised’); and all task data is available and can be used to fine-tune the LM (‘semi-supervised’). We compare ULMFiT to training from scratch—which is necessary for hypercolumn-based approaches. We split off balanced fractions of the training data, keep the validation set fixed, and use the same hyperparameters as before. We show the results in Figure 3. On IMDb and AG, supervised ULMFiT with only 100 labeled examples matches the performance of training from scratch with 10× and 20× more data respectively, clearly demonstrating the benefit of general-domain LM pretraining. If we allow ULMFiT to also utilize unlabeled examples (50k for IMDb, 100k for AG), at 100 labeled examples, we match the performance of training from scratch with 50× and 100× more data on AG and IMDb respectively. On TREC-6, ULMFiT significantly improves upon training from scratch; as examples are shorter and fewer, supervised and semi-supervised ULMFiT achieve similar results. 
Impact of pretraining We compare using no pretraining with pretraining on WikiText-103 (Merity et al., 2017b) in Table 4. Pretraining is most useful for small and medium-sized datasets, which are most common in commercial applications. However, even for large datasets, pretraining improves performance. 335 LM IMDb TREC-6 AG Vanilla LM 5.98 7.41 5.76 AWD-LSTM LM 5.00 5.69 5.38 Table 5: Validation error rates for ULMFiT with a vanilla LM and the AWD-LSTM LM. LM fine-tuning IMDb TREC-6 AG No LM fine-tuning 6.99 6.38 6.09 Full 5.86 6.54 5.61 Full + discr 5.55 6.36 5.47 Full + discr + stlr 5.00 5.69 5.38 Table 6: Validation error rates for ULMFiT with different variations of LM fine-tuning. Impact of LM quality In order to gauge the importance of choosing an appropriate LM, we compare a vanilla LM with the same hyperparameters without any dropout8 with the AWD-LSTM LM with tuned dropout parameters in Table 5. Using our fine-tuning techniques, even a regular LM reaches surprisingly good performance on the larger datasets. On the smaller TREC-6, a vanilla LM without dropout runs the risk of overfitting, which decreases performance. Impact of LM fine-tuning We compare no finetuning against fine-tuning the full model (Erhan et al., 2010) (‘Full’), the most commonly used fine-tuning method, with and without discriminative fine-tuning (‘Discr’) and slanted triangular learning rates (‘Stlr’) in Table 6. Fine-tuning the LM is most beneficial for larger datasets. ‘Discr’ and ‘Stlr’ improve performance across all three datasets and are necessary on the smaller TREC-6, where regular fine-tuning is not beneficial. Impact of classifier fine-tuning We compare training from scratch, fine-tuning the full model (‘Full’), only fine-tuning the last layer (‘Last’) (Donahue et al., 2014), ‘Chain-thaw’ (Felbo et al., 2017), and gradual unfreezing (‘Freez’). We furthermore assess the importance of discriminative fine-tuning (‘Discr’) and slanted triangular learning rates (‘Stlr’). We compare the latter to an alternative, aggressive cosine annealing schedule (‘Cos’) (Loshchilov and Hutter, 2017). We use a learning rate ηL = 0.01 for ‘Discr’, learning rates 8To avoid overfitting, we only train the vanilla LM classifier for 5 epochs and keep dropout of 0.4 in the classifier. Classifier fine-tuning IMDb TREC-6 AG From scratch 9.93 13.36 6.81 Full 6.87 6.86 5.81 Full + discr 4.57 6.21 5.62 Last 6.49 16.09 8.38 Chain-thaw 5.39 6.71 5.90 Freez 6.37 6.86 5.81 Freez + discr 5.39 5.86 6.04 Freez + stlr 5.04 6.02 5.35 Freez + cos 5.70 6.38 5.29 Freez + discr + stlr 5.00 5.69 5.38 Table 7: Validation error rates for ULMFiT with different methods to fine-tune the classifier. of 0.001 and 0.0001 for the last and all other layers respectively for ‘Chain-thaw’ as in (Felbo et al., 2017), and a learning rate of 0.001 otherwise. We show the results in Table 7. Fine-tuning the classifier significantly improves over training from scratch, particularly on the small TREC-6. ‘Last’, the standard fine-tuning method in CV, severely underfits and is never able to lower the training error to 0. ‘Chainthaw’ achieves competitive performance on the smaller datasets, but is outperformed significantly on the large AG. ‘Freez’ provides similar performance as ‘Full’. ‘Discr’ consistently boosts the performance of ‘Full’ and ‘Freez’, except for the large AG. Cosine annealing is competitive with slanted triangular learning rates on large data, but under-performs on smaller datasets. 
Finally, full ULMFiT classifier fine-tuning (bottom row) achieves the best performance on IMDB and TREC-6 and competitive performance on AG. Importantly, ULMFiT is the only method that shows excellent performance across the board—and is therefore the only universal method. Classifier fine-tuning behavior While our results demonstrate that how we fine-tune the classifier makes a significant difference, fine-tuning for inductive transfer is currently under-explored in NLP as it mostly has been thought to be unhelpful (Mou et al., 2016). To better understand the fine-tuning behavior of our model, we compare the validation error of the classifier fine-tuned with ULMFiT and ‘Full’ during training in Figure 4. On all datasets, fine-tuning the full model leads to the lowest error comparatively early in training, e.g. already after the first epoch on IMDb. 336 Figure 4: Validation error rate curves for finetuning the classifier with ULMFiT and ‘Full’ on IMDb, TREC-6, and AG (top to bottom). The error then increases as the model starts to overfit and knowledge captured through pretraining is lost. In contrast, ULMFiT is more stable and suffers from no such catastrophic forgetting; performance remains similar or improves until late epochs, which shows the positive effect of the learning rate schedule. Impact of bidirectionality At the cost of training a second model, ensembling the predictions of a forward and backwards LM-classifier brings a performance boost of around 0.5–0.7. On IMDb we lower the test error from 5.30 of a single model to 4.58 for the bidirectional model. 6 Discussion and future directions While we have shown that ULMFiT can achieve state-of-the-art performance on widely used text classification tasks, we believe that language model fine-tuning will be particularly useful in the following settings compared to existing transfer learning approaches (Conneau et al., 2017; McCann et al., 2017; Peters et al., 2018): a) NLP for non-English languages, where training data for supervised pretraining tasks is scarce; b) new NLP tasks where no state-of-the-art architecture exists; and c) tasks with limited amounts of labeled data (and some amounts of unlabeled data). Given that transfer learning and particularly fine-tuning for NLP is under-explored, many future directions are possible. One possible direction is to improve language model pretraining and fine-tuning and make them more scalable: for ImageNet, predicting far fewer classes only incurs a small performance drop (Huh et al., 2016), while recent work shows that an alignment between source and target task label sets is important (Mahajan et al., 2018)—focusing on predicting a subset of words such as the most frequent ones might retain most of the performance while speeding up training. Language modeling can also be augmented with additional tasks in a multi-task learning fashion (Caruana, 1993) or enriched with additional supervision, e.g. syntax-sensitive dependencies (Linzen et al., 2016) to create a model that is more general or better suited for certain downstream tasks, ideally in a weakly-supervised manner to retain its universal properties. Another direction is to apply the method to novel tasks and models. While an extension to sequence labeling is straightforward, other tasks with more complex interactions such as entailment or question answering may require novel ways to pretrain and fine-tune. 
Finally, while we have provided a series of analyses and ablations, more studies are required to better understand what knowledge a pretrained language model captures, how this changes during fine-tuning, and what information different tasks require. 7 Conclusion We have proposed ULMFiT, an effective and extremely sample-efficient transfer learning method that can be applied to any NLP task. We have also proposed several novel fine-tuning techniques that in conjunction prevent catastrophic forgetting and enable robust learning across a diverse range of tasks. Our method significantly outperformed existing transfer learning techniques and the stateof-the-art on six representative text classification tasks. We hope that our results will catalyze new developments in transfer learning for NLP. Acknowledgments We thank the anonymous reviewers for their valuable feedback. Sebastian is supported by Irish Research Council Grant Number EBPPG/2014/30 and Science Foundation Ireland Grant Number SFI/12/RC/2289. 337 References Jonathan Baxter. 2000. A Model of Inductive Bias Learning. Journal of Artificial Intelligence Research 12:149–198. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. Annual Meeting-Association for Computational Linguistics 45(1):440. https://doi.org/10.1109/IRPS.2011.5784441. Cornelia Caragea, Nathan McNeese, Anuj Jaiswal, Greg Traylor, Hyun-Woo Kim, Prasenjit Mitra, Dinghao Wu, Andrea H Tapia, Lee Giles, Bernard J Jansen, et al. 2011. Classifying text messages for the haiti earthquake. In Proceedings of the 8th international conference on information systems for crisis response and management (ISCRAM2011). Citeseer. Rich Caruana. 1993. Multitask learning: A knowledge-based source of inductive bias. In Proceedings of the Tenth International Conference on Machine Learning. Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. 2017. GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks pages 1–10. Zi Chu, Steven Gianvecchio, Haining Wang, and Sushil Jajodia. 2012. Detecting automation of twitter accounts: Are you a human, bot, or cyborg? IEEE Transactions on Dependable and Secure Computing 9(6):811–824. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Andrew M. Dai and Quoc V. Le. 2015. Semisupervised Sequence Learning. Advances in Neural Information Processing Systems (NIPS ’15) http://arxiv.org/abs/1511.01432. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. 2014. Decaf: A deep convolutional activation feature for generic visual recognition. In International conference on machine learning. pages 647–655. Timothy Dozat and Christopher D. Manning. 2017. Deep Biaffine Attention for Neural Dependency Parsing. In Proceedings of ICLR 2017. Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. 2010. Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research 11(Feb):625–660. Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of NAACL-HLT 2018. Bharath Hariharan, Pablo Arbel´aez, Ross Girshick, and Jitendra Malik. 2015. Hypercolumns for object segmentation and fine-grained localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 447–456. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Gao Huang, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. 2017. Densely Connected Convolutional Networks. In Proceedings of CVPR 2017. Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. 2016. What makes ImageNet good for transfer learning? arXiv preprint arXiv:1608.08614 . Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning. pages 448–456. Nitin Jindal and Bing Liu. 2007. Review spam detection. In Proceedings of the 16th international conference on World Wide Web. ACM, pages 1189– 1190. Rie Johnson and Tong Zhang. 2016. Supervised and semi-supervised text categorization using lstm for region embeddings. In International Conference on Machine Learning. pages 526–534. Rie Johnson and Tong Zhang. 2017. Deep pyramid convolutional neural networks for text categorization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 562–570. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntax-sensitive dependencies. arXiv preprint arXiv:1611.01368 . Liyuan Liu, Jingbo Shang, Frank Xu, Xiang Ren, Huan Gui, Jian Peng, and Jiawei Han. 2018. Empower sequence labeling with task-aware neural language model. In Proceedings of AAAI 2018. 338 Jonathan Long, Evan Shelhamer, and Trevor Darrell. 2015a. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 3431–3440. Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. 2015b. Learning Transferable Features with Deep Adaptation Networks. In Proceedings of the 32nd International Conference on Machine learning (ICML ’15). volume 37. Ilya Loshchilov and Frank Hutter. 2017. SGDR: Stochastic Gradient Descent with Warm Restarts. In Proceedings of the Internal Conference on Learning Representations 2017. Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 142–150. Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. 2018. Exploring the Limits of Weakly Supervised Pretraining . Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in Translation: Contextualized Word Vectors. In Advances in Neural Information Processing Systems. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017a. Regularizing and Optimizing LSTM Language Models. 
arXiv preprint arXiv:1708.02182 . Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017b. Pointer Sentinel Mixture Models. In Proceedings of the International Conference on Learning Representations 2017. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems. Sewon Min, Minjoon Seo, and Hannaneh Hajishirzi. 2017. Question Answering through Transfer Learning from Large Fine-grained Supervision Data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Short Papers). Takeru Miyato, Andrew M Dai, and Ian Goodfellow. 2016. Adversarial training methods for semi-supervised text classification. arXiv preprint arXiv:1605.07725 . Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2016. How Transferable are Neural Networks in NLP Applications? Proceedings of 2016 Conference on Empirical Methods in Natural Language Processing . Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2015. Discriminative neural sentence modeling by tree-based convolution. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. EWT Ngai, Yong Hu, YH Wong, Yijun Chen, and Xin Sun. 2011. The application of data mining techniques in financial fraud detection: A classification framework and an academic review of literature. Decision Support Systems 50(3):559–569. Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering 22(10):1345–1359. Matthew E Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language models. In Proceedings of ACL 2017. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL 2018. Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444 . Marek Rei. 2017. Semi-supervised multitask learning for sequence labeling. In Proceedings of ACL 2017. Herbert L Roitblat, Anne Kershaw, and Patrick Oot. 2010. Document categorization in legal electronic discovery: computer classification vs. manual review. Journal of the Association for Information Science and Technology 61(1):70–80. Sebastian Ruder. 2016. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747 . Ruslan Salakhutdinov and Geoffrey Hinton. 2009. Deep boltzmann machines. In Artificial Intelligence and Statistics. pages 448–455. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709 . Aliaksei Severyn and Alessandro Moschitti. 2015. UNITN: Training Deep Convolutional Neural Network for Twitter Sentiment Classification. Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015) pages 464–469. Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. 2014. Cnn features offthe-shelf: an astounding baseline for recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition. pages 806–813. 339 Leslie N Smith. 2017. Cyclical learning rates for training neural networks. In Applications of Computer Vision (WACV), 2017 IEEE Winter Conference on. IEEE, pages 464–472. 
Vladimir Naumovich Vapnik and Samuel Kotz. 1982. Estimation of dependences based on empirical data, volume 40. Springer-Verlag New York. Ellen M Voorhees and Dawn M Tice. 1999. The trec-8 question answering track evaluation. In TREC. volume 1999, page 82. John Wieting and Kevin Gimpel. 2017. Revisiting Recurrent Networks for Paraphrastic Sentence Embeddings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017). Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. How transferable are features in deep neural networks? In Advances in neural information processing systems. pages 3320–3328. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems. pages 649–657. Peng Zhou, Zhenyu Qi, Suncong Zheng, Jiaming Xu, Hongyun Bao, and Bo Xu. 2016. Text classification improved by integrating bidirectional lstm with twodimensional max pooling. In Proceedings of COLING 2016.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 340–350 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 340 Evaluating neural network explanation methods using hybrid documents and morphosyntactic agreement Nina Poerner, Benjamin Roth & Hinrich Sch¨utze Center for Information and Language Processing LMU Munich, Germany [email protected] Abstract The behavior of deep neural networks (DNNs) is hard to understand. This makes it necessary to explore post hoc explanation methods. We conduct the first comprehensive evaluation of explanation methods for NLP. To this end, we design two novel evaluation paradigms that cover two important classes of NLP problems: small context and large context problems. Both paradigms require no manual annotation and are therefore broadly applicable. We also introduce LIMSSE, an explanation method inspired by LIME that is designed for NLP. We show empirically that LIMSSE, LRP and DeepLIFT are the most effective explanation methods and recommend them for explaining DNNs in NLP. 1 Introduction DNNs are complex models that combine linear transformations with different types of nonlinearities. If the model is deep, i.e., has many layers, then its behavior during training and inference is notoriously hard to understand. This is a problem for both scientific methodology and real-world deployment. Scientific methodology demands that we understand our models. In the real world, a decision (e.g., “your blog post is offensive and has been removed”) by itself is often insufficient; in addition, an explanation of the decision may be required (e.g., “our system flagged the following words as offensive”). The European Union plans to mandate that intelligent systems used for sensitive applications provide such explanations (European General Data Protection Regulation, expected 2018, cf. Goodman and Flaxman (2016)). A number of post hoc explanation methods for DNNs have been proposed. Due to the complexity of the DNNs they explain, these methods are necessarily approximations and come with their own sources of error. At this point, it is not clear which of these methods to use when reliable explanations for a specific DNN architecture are needed. Definitions. (i) A task method solves an NLP problem, e.g., a GRU that predicts sentiment. (ii) An explanation method explains the behavior of a task method on a specific input. For our purpose, it is a function φ(t, k, X) that assigns real-valued relevance scores for a target class k (e.g., positive) to positions t in an input text X (e.g., “great food”). For this example, an explanation method might assign: φ(1, k, X) > φ(2, k, X). (iii) An (explanation) evaluation paradigm quantitatively evaluates explanation methods for a task method, e.g., by assigning them accuracies. Contributions. (i) We present novel evaluation paradigms for explanation methods for two classes of common NLP tasks (see §2). Crucially, neither paradigm requires manual annotations and our methodology is therefore broadly applicable. (ii) Using these paradigms, we perform a comprehensive evaluation of explanation methods for NLP (§3). We cover the most important classes of task methods, RNNs and CNNs, as well as the recently proposed Quasi-RNNs. (iii) We introduce LIMSSE (§3.6), an explanation method inspired by LIME (Ribeiro et al., tasks sentiment analysis, morphological prediction, . . . task methods CNN, GRU, LSTM, . . . 
explanation methods LIMSSE, LRP, DeepLIFT, . . . evaluation paradigms hybrid document, morphosyntactic agreement Table 1: Terminology with examples. 341 lrp From : kolstad @ cae.wisc.edu ( Joel Kolstad ) Subject : Re : Can Radio Freq . Be Used To Measure Distance ? [...] What is the difference between vertical and horizontal ? Gravity ? Does n’t gravity pull down the photons and cause a doppler shift or something ? ( Just kidding ! ) gradL2 1p If you find faith to be honest , show me how . David The whole denominational mindset only causes more problems , sadly . ( See section 7 for details . ) Thank you . ’The Armenians just shot and shot . Maybe coz they ’re ’quality’ cars ; - ) 200 posts/day . [...] limssems s If you find faith to be honest , show me how . David The whole denominational mindset only causes more problems , sadly . ( See section 7 for details . ) Thank you . ’The Armenians just shot and shot . Maybe coz they ’re ’quality’ cars ; - ) 200 posts/day . [...] Figure 1: Top: sci.electronics post (not hybrid). Underlined: Manual relevance ground truth. Green: evidence for sci.electronics. Task method: CNN. Bottom: hybrid newsgroup post, classified talk.politics.mideast. Green: evidence for talk.politics.mideast. Underlined: talk.politics.mideast fragment. Task method: QGRU. Italics: OOV. Bold: rmax position. See supplementary for full texts. 2016) that is designed for word-order sensitive task methods (e.g., RNNs, CNNs). We show empirically that LIMSSE, LRP (Bach et al., 2015) and DeepLIFT (Shrikumar et al., 2017) are the most effective explanation methods (§4): LRP and DeepLIFT are the most consistent methods, while LIMSSE wins the hybrid document experiment. 2 Evaluation paradigms In this section, we introduce two novel evaluation paradigms for explanation methods on two types of common NLP tasks, small context tasks and large context tasks. Small context tasks are defined as those that can be solved by finding short, self-contained indicators, such as words and phrases, and weighing them up (i.e., tasks where CNNs with pooling can be expected to perform well). We design the hybrid document paradigm for evaluating explanation methods on small context tasks. Large context tasks require the correct handling of long-distance dependencies, such as subject-verb agreement.1 We design the morphosyntactic agreement paradigm for evaluating explanation methods on large context tasks. We could also use human judgments for evaluation. While we use Mohseni and Ragan (2018)’s manual relevance benchmark for comparison, there are two issues with it: (i) Due to the cost of human labor, it is limited in size and domain. (ii) More importantly, a good explanation method should not reflect what humans attend to, but what task methods attend to. For instance, the family name “Kolstad” has 11 out of its 13 appearances in the 20 newsgroups corpus in sci.electronics posts. Thus, task methods probably learn it as a sci.electronics indicator. Indeed, the 1Consider deciding the number of [verb] in “the children in the green house said that the big telescope [verb]” vs. “the children in the green house who broke the big telescope [verb]”. The local contexts of “children” or “[verb]” do not suffice to solve this problem, instead, the large context of the entire sentence has to be considered. explanation method in Fig 1 (top) marks “Kolstad” as relevant, but the human annotator does not. 
2.1 Small context: Hybrid document paradigm Given a collection of documents, hybrid documents are created by randomly concatenating document fragments. We assume that, on average, the most relevant input for a class k in a hybrid document is located in a fragment that stems from a document with gold label k. Hence, an explanation method succeeds if it places maximal relevance for k inside the correct fragment. Formally, let xt be a word inside hybrid document X that originates from a document X′ with gold label y(X′). xt’s gold label y(X, t) is set to y(X′). Let f(X) be the class assigned to the hybrid document by a task method, and let φ be an explanation method as defined above. Let rmax(X, φ) denote the position of the maximally relevant word in X for the predicted class f(X). If this maximally relevant word comes from a document with the correct gold label, the explanation method is awarded a hit: hit(φ, X) = I[y X, rmax(X, φ)  = f(X)] (1) where I[P] is 1 if P is true and 0 otherwise. In Fig 1 (bottom), the explanation method gradL2 1p places rmax outside the correct (underlined) fragment. Therefore, it does not get a hit point, while limssems s does. The pointing game accuracy of an explanation method is calculated as its total number of hit points divided by the number of possible hit points. This is a form of the pointing game paradigm from computer vision (Zhang et al., 2016). 2.2 Large context: Morphosyntactic agreement paradigm Many natural languages display morphosyntactic agreement between words v and w. A DNN that 342 graddot R s the link provided by the editor above [encourages ...] lrp the link provided by the editor above [encourages ...] limssebb the link provided by the editor above [encourages ...] gradL2 R s few if any events in history [are ...] occ1 few if any events in history [are ...] limssems s few if any events in history [are ...] Figure 2: Top: verb context classified singular. Green: evidence for singular. Task method: GRU. Bottom: verb context classified plural. Green: evidence for plural. Task method: LSTM. Underlined: subject. Bold: rmax position. predicts the agreeing feature in w should pay attention to v. For example, in the sentence “the children with the telescope are home”, the number of the verb (plural for “are”) can be predicted from the subject (“children”) without looking at the verb. If the language allows for v and w to be far apart (Fig 3, top), successful task methods have to be able to handle large contexts. Linzen et al. (2016) show that English verb number can be predicted by a unidirectional LSTM with accuracy > 99%, based on left context alone. When a task method predicts the correct number, we expect successful explanation methods to place maximal relevance on the subject: hittarget(φ, X) = I[rmax(X, φ) = target(X)] where target(X) is the location of the subject, and rmax is calculated as above. Regardless of whether the prediction is correct, we expect rmax to fall onto a noun that has the predicted number: hitfeat(φ, X) = I[feat X, rmax(X, φ)  = f(X)] where feat(X, t) is the morphological feature (here: number) of xt. In Fig 2, rmax on “link” gives a hittarget point (and a hitfeat point), rmax on “editor” gives a hitfeat point. gradL2 R s does not get any points as “history” is not a plural noun. Labels for this task can be automatically generated using part-of-speech taggers and parsers, which are available for many languages. 3 Explanation methods In this section, we define the explanation methods that will be evaluated. 
For our purpose, explanation methods produce word relevance scores φ(t, k, X), which are specific to a given class k and a given input X. φ(t, k, X) > φ(t′, k, X) means that xt contributed more than xt′ to the task method’s (potential) decision to classify X as k. 3.1 Gradient-based explanation methods Gradient-based explanation methods approximate the contribution of some DNN input i to some output o with o’s gradient with respect to i (Simonyan et al., 2014). In the following, we consider two output functions o(k, X), the unnormalized class score s(k, X) and the class probability p(k|X): s(k, X) = ⃗wk · ⃗h(X) + bk (2) p(k|X) = exp s(k, X)  PK k′=1 exp s(k′, X)  (3) where k is the target class, ⃗h(X) the document representation (e.g., an RNN’s final hidden layer), ⃗wk (resp. bk) k’s weight vector (resp. bias). The simple gradient of o(k, X) w.r.t. i is: grad1(i, k, X) = ∂o(k, X) ∂i (4) grad1 underestimates the importance of inputs that saturate a nonlinearity (Shrikumar et al., 2017). To address this, Sundararajan et al. (2017) integrate over all gradients on a linear interpolation α ∈[0, 1] between a baseline input ¯X (here: all-zero embeddings) and X: gradR (i, k, X) = R 1 α=0 ∂o(k, ¯X+α(X−¯X)) ∂i ∂α ≈ 1 M PM m=1 ∂o(k, ¯X+ m M (X−¯X)) ∂i (5) where M is a big enough constant (here: 50). In NLP, symbolic inputs (e.g., words) are often represented as one-hot vectors ⃗xt ∈{1, 0}|V | and embedded via a real-valued matrix: ⃗et = M⃗xt. Gradients are computed with respect to individual entries of E = [⃗e1 . . .⃗e|X|]. Bansal et al. (2016) and Hechtlinger (2016) use the L2 norm to reduce vectors of gradients to single values: φgradL2(t, k, X) = ||grad(⃗et, k, E)|| (6) where grad(⃗et, k, E) is a vector of elementwise gradients w.r.t. ⃗et. Denil et al. (2015) use the dot product of the gradient vector and the embedding2, i.e., the gradient of the “hot” entry in ⃗xt: φgraddot(t, k, X) = ⃗et · grad(⃗et, k, E) (7) We use “grad1” for Eq 4, “gradR ” for Eq 5, “p” for Eq 3, “s” for Eq 2, “L2” for Eq 6 and “dot” for Eq 7. This gives us eight explanation methods: gradL2 1s , gradL2 1p, graddot 1s , graddot 1p , gradL2 R s, gradL2 R p, graddot R s , graddot R p. 2For graddot R , replace ⃗et with ⃗et −⃗¯et. Since our baseline embeddings are all-zeros, this is equivalent. 343 3.2 Layer-wise relevance propagation Layer-wise relevance propagation (LRP) is a backpropagation-based explanation method developed for fully connected neural networks and CNNs (Bach et al., 2015) and later extended to LSTMs (Arras et al., 2017b). In this paper, we use Epsilon LRP (Eq 58, Bach et al. (2015)). Remember that the activation of neuron j, aj, is the sum of weighted upstream activations, P i aiwi,j, plus bias bj, squeezed through some nonlinearity. We denote the pre-nonlinearity activation of j as a′j. The relevance of j, R(j), is distributed to upstream neurons i proportionally to the contribution that i makes to a′j in the forward pass: R(i) = X j R(j) aiwi,j a′j + esign(a′j) (8) This ensures that relevance is conserved between layers, with the exception of relevance attributed to bj. To prevent numerical instabilities, esign(a′) returns −ϵ if a′ < 0 and ϵ otherwise. We set ϵ = .001. The full algorithm is: R(Lk′) = s(k, X)I[k′ = k] ... recursive application of Eq 8 ... φlrp(t, k, X) = dim(⃗et) X j=1 R(et,j) where L is the final layer, k the target class and R(et,j) the relevance of dimension j in the t’th embedding vector. 
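Before continuing with the LRP variants, a short sketch of the gradient-based scores from §3.1 may be useful. This is a hedged illustration rather than the authors' implementation; the interface of `model` (mapping an embedding matrix to unnormalized class scores) is an assumption.

```python
import torch

def simple_gradient_relevances(model, embeddings, target_class):
    """grad1 relevances w.r.t. word embeddings.
    Assumes `model` maps an embedding matrix E of shape (seq_len, dim) to a
    1-D tensor of unnormalized class scores s(k, X); `embeddings` is E."""
    emb = embeddings.clone().detach().requires_grad_(True)
    scores = model(emb)                    # unnormalized scores, shape (K,)
    scores[target_class].backward()        # fills emb.grad with d s(k, X) / d E
    grads = emb.grad
    grad_l2 = grads.norm(dim=1)            # Eq. 6: per-token L2 norm
    grad_dot = (emb * grads).sum(dim=1)    # Eq. 7: per-token dot product
    return grad_l2, grad_dot
```

The integrated variants (Eq. 5) can be approximated by averaging such gradients over M scaled copies of the embeddings between the all-zero baseline and E.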
For ϵ →0 and provided that all nonlinearities up to the unnormalized class score are relu, Epsilon LRP is equivalent to the product of input and raw score gradient (here: graddot 1s ) (Kindermans et al., 2016). In our experiments, the second requirement holds only for CNNs. Experiments by Ancona et al. (2017) (see §6) suggest that LRP does not work well for LSTMs if all neurons – including gates – participate in backpropagation. We therefore use Arras et al. (2017b)’s modification and treat sigmoid-activated gates as time step-specific weights rather than neurons. For instance, the relevance of LSTM candidate vector ⃗gt is calculated from memory vector ⃗ct and input gate vector⃗it as R(gt,d) = R(ct,d) gt,d · it,d ct,d + esign(ct,d) This is equivalent to applying Eq 8 while treating ⃗it as a diagonal weight matrix. The gate neurons in⃗it do not receive any relevance themselves. See supplementary material for formal definitions of Epsilon LRP for different architectures. 3.3 DeepLIFT DeepLIFT (Shrikumar et al., 2017) is another backpropagation-based explanation method. Unlike LRP, it does not explain s(k, X), but s(k, X)−s(k, ¯X), where ¯X is some baseline input (here: all-zero embeddings). Following Ancona et al. (2018) (Eq 4), we use this backpropagation rule: R(i) = X j R(j) aiwi,j −¯aiwi,j a′ j −¯a′ j + esign(a′ j −¯a′ j) where ¯a refers to the forward pass of the baseline. Note that the original method has a different mechanism for avoiding small denominators; we use esign for compatibility with LRP. The DeepLIFT algorithm is started with R(Lk′) = s(k, X)−s(k, ¯X)  I[k′ = k]. On gated (Q)RNNs, we proceed analogous to LRP and treat gates as weights. 3.4 Cell decomposition for gated RNNs The cell decomposition explanation method for LSTMs (Murdoch and Szlam, 2017) decomposes the unnormalized class score s(k, X) (Eq 2) into additive contributions. For every time step t, we compute how much of ⃗ct “survives” until the final step T and contributes to s(k, X). This is achieved by applying all future forget gates ⃗f, the final tanh nonlinearity, the final output gate ⃗oT , as well as the class weights of k to ⃗ct. We call this quantity “net load of t for class k”: nl(t, k, X) = ⃗wk ·  ⃗oT ⊙tanh ( T Y j=t+1 ⃗fj) ⊙⃗ct  where ⊙and Q are applied elementwise. The relevance of t is its gain in net load relative to t −1: φdecomp(t, k, X) = nl(t, k, X) −nl(t −1, k, X). For GRU, we change the definition of net load: nl(t, k, X) = ⃗wk · ( T Y j=t+1 ⃗zj) ⊙⃗ht  where ⃗z are GRU update gates. 3.5 Input perturbation methods Input perturbation methods assume that the removal or masking of relevant inputs changes the 344 output (Zeiler and Fergus, 2014). Omissionbased methods remove inputs completely (K´ad´ar et al., 2017), while occlusion-based methods replace them with a baseline (Li et al., 2016b). In computer vision, perturbations are usually applied to patches, as neighboring pixels tend to correlate (Zintgraf et al., 2017). To calculate the omitN (resp. occN) relevance of word xt, we delete (resp. occlude), one at a time, all N-grams that contain xt, and average the change in the unnormalized class score from Eq 2: φ[omit|occ]N (t, k, X) = PN j=1  s(k, [⃗e1 . . .⃗e|X|]) −s(k, [⃗e1 . . .⃗et−N−1+j]∥¯E∥[⃗et+j . . .⃗e|X|])  1 N where ⃗et are embedding vectors, ∥denotes concatenation and ¯E is either a sequence of length zero (φomit) or a sequence of N baseline (here: all-zero) embedding vectors (φocc). 
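To make the perturbation scores of §3.5 concrete, the following is a small occ_n sketch, again an illustration under assumptions rather than the authors' code: every n-gram containing token t is replaced by all-zero baseline embeddings and the resulting drops in s(k, X) are averaged; n-grams that would cross the document boundary are simply skipped here.

```python
import torch

def occlusion_relevances(model, embeddings, target_class, n=1):
    """occ_n relevance: average drop in the unnormalized score when each
    n-gram containing token t is replaced by zero embeddings.
    Assumes `model` maps a (seq_len, dim) embedding matrix to class scores."""
    with torch.no_grad():
        base = model(embeddings)[target_class].item()
        seq_len = embeddings.size(0)
        relevance = torch.zeros(seq_len)
        for t in range(seq_len):
            drops = []
            for start in range(t - n + 1, t + 1):   # all n-grams covering t
                if start < 0 or start + n > seq_len:
                    continue                        # skip out-of-range windows
                occluded = embeddings.clone()
                occluded[start:start + n] = 0.0     # zero-embedding baseline
                drops.append(base - model(occluded)[target_class].item())
            relevance[t] = sum(drops) / max(len(drops), 1)
    return relevance
```

The omit variant deletes the window instead of zeroing it, which changes the sequence length and, as discussed in §5.1, can interfere more strongly with syntactic structure.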
3.6 LIMSSE: LIME for NLP Local Interpretable Model-agnostic Explanations (LIME) (Ribeiro et al., 2016) is a framework for explaining predictions of complex classifiers. LIME approximates the behavior of classifier f in the neighborhood of input X with an interpretable (here: linear) model. The interpretable model is trained on samples Z1 . . . ZN (here: N = 3000), which are randomly drawn from X, with “gold labels” f(Z1) . . . f(ZN). Since RNNs and CNNs respect word order, we cannot use the bag of words sampling method from the original description of LIME. Instead, we introduce Local Interpretable Model-agnostic Substring-based Explanations (LIMSSE). LIMSSE uniformly samples a length ln (here: 1 ≤ln ≤6) and a starting point sn, which define the substring Zn = [⃗xsn . . . ⃗xsn+ln−1]. To the linear model, Zn is represented by a binary vector ⃗zn ∈{0, 1}|X|, where zn,t = I[sn ≤t < sn + ln]. We learn a linear weight vector ˆ⃗vk ∈R|X|, whose entries are word relevances for k, i.e., φlimsse(t, k, X) = ˆvk,t. To optimize it, we experiment with three loss functions. The first, which we will refer to as limssebb, assumes that our DNN is a total black box that delivers only a classification: ˆ⃗vk = argmin ⃗vk X n −  log σ(⃗zn · ⃗vk)  I[f(Zn) = k] + log 1 −σ(⃗zn · ⃗vk)  I[f(Zn) ̸= k]  where f(Zn) = argmaxk′ p(k′|Zn)  . The black box approach is maximally general, but insensitive to the magnitude of evidence found in Zn. Hence, we also test magnitude-sensitive loss functions: ˆ⃗vk = argmin ⃗vk X n ⃗zn · ⃗vk −o(k, Zn) 2 where o(k, Zn) is one of s(k, Zn) or p(k|Zn). We refer to these as limssems s and limssems p . 4 Experiments 4.1 Hybrid document experiment For the hybrid document experiment, we use the 20 newsgroups corpus (topic classification) (Lang, 1995) and reviews from the 10th yelp dataset challenge (binary sentiment analysis)3. We train five DNNs per corpus: a bidirectional GRU (Cho et al., 2014), a bidirectional LSTM (Hochreiter and Schmidhuber, 1997), a 1D CNN with global max pooling (Collobert et al., 2011), a bidirectional Quasi-GRU (QGRU), and a bidirectional Quasi-LSTM (QLSTM). The Quasi-RNNs are 1D CNNs with a feature-wise gated recursive pooling layer (Bradbury et al., 2017). Word embeddings are R300 and initialized with pre-trained GloVe embeddings (Pennington et al., 2014)4. The main layer has a hidden size of 150 (bidirectional architectures: 75 dimensions per direction). For the QRNNs and CNN, we use a kernel width of 5. In all five architectures, the resulting document representation is projected to 20 (resp. two) dimensions using a fully connected layer, followed by a softmax. See supplementary material for details on training and regularization. After training, we sentence-tokenize the test sets, shuffle the sentences, concatenate ten sentences at a time and classify the resulting hybrid documents. Documents that are assigned a class that is not the gold label of at least one constituent word are discarded (yelp: < 0.1%; 20 newsgroups: 14% - 20%). On the remaining documents, we use the explanation methods from §3 to find the maximally relevant word for each prediction. The random baseline samples the maximally relevant word from a uniform distribution. For reference, we also evaluate on a human judgment benchmark (Mohseni and Ragan (2018), Table 2, C11-C15). It contains 3www.yelp.com/dataset_challenge 4http://nlp.stanford.edu/data/glove. 
840B.300d.zip 345 column C01 C02 C03 C04 C05 C06 C07 C08 C09 C10 C11 C12 C13 C14 C15 C16 C17 C18 C19 C20 C21 C22 C23 C24 C25 C26 C27 hybrid document experiment man. groundtruth morphosyntactic agreement experiment hittarget hitfeat yelp 20 newsgroups 20 newsgroups f(X) = y(X) f(X) ̸= y(X) φ GRU QGRU LSTM QLSTM CNN GRU QGRU LSTM QLSTM CNN GRU QGRU LSTM QLSTM CNN GRU QGRU LSTM QLSTM GRU QGRU LSTM QLSTM GRU QGRU LSTM QLSTM gradL2 1s .61 .68 .67 .70 .68 .45 .47 .25 .33 .79 .26 .31 .07 .18 .74 .48 .23 .63 .19 .52 .27 .73 .22 .09 .11 .19 .19 gradL2 1p .57 .67 .67 .70 .74 .40 .43 .26 .34 .70 .18 .35 .07 .13 .66 .48 .22 .63 .18 .53 .26 .73 .21 .09 .09 .18 .11 gradL2 R s .71 .66 .69 .71 .70 .58 .32 .26 .21 .82 .23 .15 .11 .08 .76 .69 .67 .68 .51 .73 .70 .75 .55 .19 .22 .20 .20 gradL2 R p .71 .70 .72 .71 .77 .56 .34 .30 .23 .81 .13 .08 .14 .01 .78 .68 .77 .50 .70 .74 .82 .54 .78 .19 .21 .19 .30 graddot 1s .88 .85 .81 .77 .86 .79 .76 .59 .72 .89 .80 .70 .14 .47 .79 .81 .62 .73 .56 .85 .66 .81 .59 .42 .34 .46 .36 graddot 1p .92 .88 .84 .79 .95 .78 .72 .59 .72 .81 .71 .59 .20 .44 .69 .79 .58 .74 .54 .83 .61 .81 .56 .41 .33 .46 .35 graddot R s .84 .90 .85 .87 .87 .81 .68 .60 .68 .89 .82 .64 .21 .26 .80 .90 .87 .78 .84 .94 .92 .83 .89 .54 .51 .46 .52 graddot R p .86 .89 .84 .89 .96 .80 .69 .62 .73 .89 .80 .53 .40 .54 .78 .87 .85 .68 .84 .93 .92 .74 .93 .53 .48 .42 .51 omit1 .79 .82 .85 .87 .61 .78 .75 .54 .76 .82 .80 .48 .33 .48 .65 .81 .81 .79 .80 .86 .87 .86 .84 .43 .45 .44 .45 omit3 .89 .80 .89 .88 .59 .79 .71 .72 .81 .76 .77 .37 .36 .49 .61 .74 .77 .73 .73 .82 .84 .82 .79 .41 .45 .42 .46 omit7 .92 .88 .91 .91 .70 .79 .77 .77 .84 .84 .77 .49 .44 .55 .65 .76 .80 .66 .74 .85 .88 .78 .80 .40 .48 .43 .47 occ1 .80 .71 .74 .84 .61 .78 .73 .60 .77 .82 .77 .49 .19 .10 .65 .91 .85 .86 .86 .94 .88 .89 .88 .50 .44 .46 .47 occ3 .92 .61 .93 .85 .59 .78 .63 .74 .74 .76 .74 .37 .32 .35 .61 .74 .73 .71 .72 .78 .76 .76 .76 .43 .37 .41 .43 occ7 .92 .77 .93 .90 .70 .78 .62 .74 .77 .84 .74 .35 .43 .39 .65 .64 .65 .63 .65 .73 .73 .72 .73 .36 .35 .39 .43 decomp .79 .88 .92 .88 .75 .79 .77 .80 .54 .36 .72 .51 .84 .87 .86 .90 .90 .93 .92 .96 .52 .58 .57 .63 lrp .92 .87 .91 .84 .86 .82 .83 .79 .85 .89 .85 .72 .74 .81 .79 .90 .90 .86 .91 .95 .95 .91 .95 .58 .60 .52 .63 deeplift .91 .89 .94 .85 .87 .82 .83 .78 .84 .89 .84 .72 .70 .81 .80 .91 .90 .85 .91 .95 .95 .90 .95 .59 .59 .52 .63 limssebb .81 .82 .83 .84 .78 .78 .81 .78 .80 .84 .52 .53 .53 .54 .57 .43 .41 .44 .42 .54 .51 .56 .52 .39 .43 .42 .41 limssems s .94 .94 .93 .93 .91 .85 .87 .83 .86 .89 .85 .84 .76 .84 .82 .62 .62 .67 .63 .75 .74 .82 .75 .52 .53 .55 .53 limssems p .87 .88 .85 .86 .94 .85 .86 .83 .86 .90 .81 .80 .74 .76 .76 .62 .62 .67 .63 .75 .74 .82 .75 .51 .53 .55 .53 random .69 .67 .70 .69 .66 .20 .19 .22 .22 .21 .09 .09 .06 .06 .08 .27 .27 .27 .27 .33 .33 .33 .33 .12 .13 .12 .12 last .66 .67 .66 .67 .76 .77 .76 .77 .21 .27 .25 .26 N 7551 ≤N ≤7554 3022 ≤N ≤3230 137 ≤N ≤150 N ≈1400000 N ≈20000 Table 2: Pointing game accuracies in hybrid document experiment (left), on manually annotated benchmark (middle) and in morphosyntactic agreement experiment (right). hittarget (resp. hitfeat): maximal relevance on subject (resp. on noun with the predicted number feature). Bold: top explanation method. Underlined: within 5 points of top explanation method. 188 documents from the 20 newsgroups test set (classes sci.med and sci.electronics), with one manually created list of relevant words per document. 
We discard documents that are incorrectly classified (20% - 27%) and define: hit(φ, X) = I[rmax(X, φ) ∈gt(X)], where gt(X) is the manual ground truth. 4.2 Morphosyntactic agreement experiment For the morphosyntactic agreement experiment, we use automatically annotated English Wikipedia sentences by Linzen et al. (2016)5. For our purpose, a sample consists of: all words preceding the verb: X = [x1 · · · xT ]; part-of-speech (POS) tags: pos(X, t) ∈{VBZ, VBP, NN, NNS, . . .}; and the position of the subject: target(X) ∈[1, T]. The number feature is derived from the POS: feat(X, t) =      Sg if pos(X, t) ∈{VBZ, NN} Pl if pos(X, t) ∈{VBP, NNS} n/a otherwise The gold label of a sentence is the number of its verb, i.e., y(X) = feat(X, T + 1). 5www.tallinzen.net/media/rnn_ agreement/agr_50_mostcommon_10K.tsv.gz As task methods, we replicate Linzen et al. (2016)’s unidirectional LSTM (R50 randomly initialized word embeddings, hidden size 50). We also train unidirectional GRU, QGRU and QLSTM architectures with the same dimensionality. We use the explanation methods from §3 to find the most relevant word for predictions on the test set. As described in §2.2, explanation methods are awarded a hittarget (resp. hitfeat) point if this word is the subject (resp. a noun with the predicted number feature). For reference, we use a random baseline as well as a baseline that assumes that the most relevant word directly precedes the verb. 5 Discussion 5.1 Explanation methods Our experiments suggest that explanation methods for neural NLP differ in quality. As in previous work (see §6), gradient L2 norm (gradL2) performs poorly, especially on RNNs. We assume that this is due to its inability to distinguish relevances for and against k. Gradient embedding dot product (graddot) is competitive on CNN (Table 2, graddot 1p C05, graddot 1s C10, C15), presumably because relu is linear on positive inputs, so gradients are exact in346 decomp initially a pagan culture , detailed information about the return of the christian religion to the islands during the norse-era [is ...] deeplift initially a pagan culture , detailed information about the return of the christian religion to the islands during the norse-era [is ...] limssems p initially a pagan culture , detailed information about the return of the christian religion to the islands during the norse-era [is ...] lrp Your day is done . Definitely looking forward to going back . All three were outstanding ! I would highly recommend going here to anyone . We will see if anyone returns the message my boyfriend left . The price is unbelievable ! And our guys are on lunch so we ca n’t fit you in . ” It ’s good , standard froyo . The pork shoulder was THAT tender . Try it with the Tomato Basil cram sauce . limssems p Your day is done . Definitely looking forward to going back . All three were outstanding ! I would highly recommend going here to anyone . We will see if anyone returns the message my boyfriend left . The price is unbelievable ! And our guys are on lunch so we ca n’t fit you in . ” It ’s good , standard froyo . The pork shoulder was THAT tender . Try it with the Tomato Basil cram sauce . Figure 3: Top: verb context classified singular. Task method: LSTM. Bottom: hybrid yelp review, classified positive. Task method: QLSTM. stead of approximate. graddot also has decent performance for GRU (graddot 1p C01, graddot R s C{06, 11, 16, 20, 24}), perhaps because GRU hidden activations are always in [-1,1], where tanh and σ are approximately linear. 
Integrated gradient (gradR ) mostly outperforms simple gradient (grad1), though not consistently (C01, C07). Contrary to expectation, integration did not help much with the failure of the gradient method on LSTM on 20 newsgroups (graddot 1 vs. graddot R in C08, C13), which we had assumed to be due to saturation of tanh on large absolute activations in ⃗c. Smaller intervals may be needed to approximate the integration, however, this means additional computational cost. The gradient of s(k, X) performs better or similar to the gradient of p(k|X). The main exception is yelp (graddot 1s vs. graddot 1p , C01-C05). This is probably due to conflation by p(k|X) of evidence for k (numerator in Eq 3) and against competitor classes (denominator). In a two-class scenario, there is little incentive to keep classes separate, leading to information flow through the denominator. In future work, we will replace the twoway softmax with a one-way sigmoid such that φ(t, 0, X) := −φ(t, 1, X). LRP and DeepLIFT are the most consistent explanation methods across evaluation paradigms and task methods. (The comparatively low pointing game accuracies on the yelp QRNNs and CNN (C02, C04, C05) are probably due to the fact that they explain s(k, .) in a two-way softmax, see above.) On CNN (C05, C10, C15), LRP and graddot 1s perform almost identically, suggesting that they are indeed quasi-equivalent on this architecture (see §3.2). On (Q)RNNs, modified LRP and DeepLIFT appear to be superior to the gradient method (lrp vs. graddot 1s , deeplift vs. graddot R s , C01-C04, C06-C09, C11-C14, C16-C27). Decomposition performs well on LSTM, especially in the morphosyntactic agreement experiment, but it is inconsistent on other architectures. Gated RNNs have a long-term additive and a multiplicative pathway, and the decomposition method only detects information traveling via the additive one. Miao et al. (2016) show qualitatively that GRUs often reorganize long-term memory abruptly, which might explain the difference between LSTM and GRU. QRNNs only have additive recurrent connections; however, given that ⃗ct (resp. ⃗ht) is calculated by convolution over several time steps, decomposition relevance can be incorrectly attributed inside that window. This likely is the reason for the stark difference between the performance of decomposition on QRNNs in the hybrid document experiment and on the manually labeled data (C07, C09 vs. C12, C14). Overall, we do not recommend the decomposition method, because it fails to take into account all routes by which information can be propagated. Omission and occlusion produce inconsistent results in the hybrid document experiment. Shrikumar et al. (2017) show that perturbation methods can lack sensitivity when there are more relevant inputs than the “perturbation window” covers. In the morphosyntactic agreement experiment, omission is not competitive; we assume that this is because it interferes too much with syntactic structure. occ1 does better (esp. C16-C19), possibly because an all-zero “placeholder” is less disruptive than word removal. But despite some high scores, it is less consistent than other explanation methods. Magnitude-sensitive LIMSSE (limssems) consistently outperforms black-box LIMSSE (limssebb), which suggests that numerical outputs should be used for approximation where possible. In the hybrid document experiment, magnitude-sensitive LIMSSE outperforms the other explanation methods (exceptions: C03, C05). 
However, it fails in the morphosyntactic agreement experiment (C16-C27). In fact, we expect LIMSSE to be unsuited for large context 347 problems, as it cannot discover dependencies whose range is bigger than a given text sample. In Fig 3 (top), limssems p highlights any singular noun without taking into account how that noun fits into the overall syntactic structure. 5.2 Evaluation paradigms The assumptions made by our automatic evaluation paradigms have exceptions: (i) the correlation between fragment of origin and relevance does not always hold (e.g., a positive review may contain negative fragments, and will almost certainly contain neutral fragments); (ii) in morphological prediction, we cannot always expect the subject to be the only predictor for number. In Fig 2 (bottom) for example, “few” is a reasonable clue for plural despite not being a noun. This imperfect ground truth means that absolute pointing game accuracies should be taken with a grain of salt; but we argue that this does not invalidate them for comparisons. We also point out that there are characteristics of explanations that may be desirable but are not reflected by the pointing game. Consider Fig 3 (bottom). Both explanations get hit points, but the lrp explanation appears “cleaner” than limssems p , with relevance concentrated on fewer tokens. 6 Related work 6.1 Explanation methods Explanation methods can be divided into local and global methods (Doshi-Velez and Kim, 2017). Global methods infer general statements about what a DNN has learned, e.g., by clustering documents (Aubakirova and Bansal, 2016) or n-grams (K´ad´ar et al., 2017) according to the neurons that they activate. Li et al. (2016a) compare embeddings of specific words with reference points to measure how drastically they were changed during training. In computer vision, Simonyan et al. (2014) optimize the input space to maximize the activation of a specific neuron. Global explanation methods are of limited value for explaining a specific prediction as they represent average behavior. Therefore, we focus on local methods. Local explanation methods explain a decision taken for one specific input at a time. We have attempted to include all important local methods for NLP in our experiments (see §3). We do not address self-explanatory models (e.g., attention (Bahdanau et al., 2015) or rationale models (Lei et al., 2016)), as these are very specific architectures that may not be not applicable to all tasks. 6.2 Explanation evaluation According to Doshi-Velez and Kim (2017)’s taxonomy of explanation evaluation paradigms, application-grounded paradigms test how well an explanation method helps real users solve real tasks (e.g., doctors judge automatic diagnoses); human-grounded paradigms rely on proxy tasks (e.g., humans rank task methods based on explanations); functionally-grounded paradigms work without human input, like our approach. Arras et al. (2016) (cf. Samek et al. (2016)) propose a functionally-grounded explanation evaluation paradigm for NLP where words in a correctly (resp. incorrectly) classified document are deleted in descending (resp. ascending) order of relevance. They assume that the fewer words must be deleted to reduce (resp. increase) accuracy, the better the explanations. According to this metric, LRP (§3.2) outperforms gradL2 on CNNs (Arras et al., 2016) and LSTMs (Arras et al., 2017b) on 20 newsgroups. Ancona et al. (2017) perform the same experiment with a binary sentiment analysis LSTM. 
Their graph shows occ1, graddot 1 and graddot R tied in first place, while LRP, DeepLIFT and the gradient L1 norm lag behind. Note that their treatment of LSTM gates in LRP / DeepLIFT differs from our implementation. An issue with the word deletion paradigm is that it uses syntactically broken inputs, which may introduce artefacts (Sundararajan et al., 2017). In our hybrid document paradigm, inputs are syntactically intact (though semantically incoherent at the document level); the morphosyntactic agreement paradigm uses unmodified inputs. Another class of functionally-grounded evaluation paradigms interprets the performance of a secondary task method, on inputs that are derived from (or altered by) an explanation method, as a proxy for the quality of that explanation method. Murdoch and Szlam (2017) build a rule-based classifier from the most relevant phrases in a corpus (task method: LSTM). The classifier based on decomp (§3.4) outperforms the gradient-based classifier, which is in line with our results. Arras et al. (2017a) build document representations by summing over word embeddings weighted by relevance scores (task method: CNN). They show that K-nearest neighbor performs better on doc348 ument representations derived with LRP than on those derived with gradL2, which also matches our results. Denil et al. (2015) condense documents by extracting top-K relevant sentences, and let the original task method (CNN) classify them. The accuracy loss, relative to uncondensed documents, is smaller for graddot than for heuristic baselines. In the domain of human-based evaluation paradigms, Ribeiro et al. (2016) compare different variants of LIME (§3.6) by how well they help non-experts clean a corpus from words that lead to overfitting. Selvaraju et al. (2017) assess how well explanation methods help non-experts identify the more accurate out of two object recognition CNNs. These experiments come closer to real use cases than functionally-grounded paradigms; however, they are less scalable. 7 Summary We conducted the first comprehensive evaluation of explanation methods for NLP, an important undertaking because there is a need for understanding the behavior of DNNs. To conduct this study, we introduced evaluation paradigms for explanation methods for two classes of NLP tasks, small context tasks (e.g., topic classification) and large context tasks (e.g., morphological prediction). Neither paradigm requires manual annotations. We also introduced LIMSSE, a substring-based explanation method inspired by LIME and designed for NLP. Based on our experimental results, we recommend LRP, DeepLIFT and LIMSSE for small context tasks and LRP and DeepLIFT for large context tasks, on all five DNN architectures that we tested. On CNNs and possibly GRUs, the (integrated) gradient embedding dot product is a good alternative to DeepLIFT and LRP. 8 Code Our implementation of LIMSSE, the gradient, perturbation and decomposition methods can be found in our branch of the keras package: www.github.com/ NPoe/keras. To re-run our experiments, see scripts in www.github.com/NPoe/ neural-nlp-explanation-experiment. Our LRP implementation (same repository) is adapted from Arras et al. (2017b)6. 6https://github.com/ArrasL/LRP_for_ LSTM References Marco Ancona, Enea Ceolini, Cengiz ¨Oztireli, and Markus Gross. 2017. A unified view of gradientbased attribution methods for deep neural networks. In Conference on Neural Information Processing System, Long Beach, USA. 
Marco Ancona, Enea Ceolini, Cengiz ¨Oztireli, and Markus Gross. 2018. Towards better understanding of gradient-based attribution methods for deep neural networks. In International Conference on Learning Representations, Vancouver, Canada. Leila Arras, Franziska Horn, Gr´egoire Montavon, Klaus-Robert M¨uller, and Wojciech Samek. 2016. Explaining predictions of non-linear classifiers in NLP. In First Workshop on Representation Learning for NLP, pages 1–7, Berlin, Germany. Leila Arras, Franziska Horn, Gr´egoire Montavon, Klaus-Robert M¨uller, and Wojciech Samek. 2017a. What is relevant in a text document?: An interpretable machine learning approach. PloS one, 12(8):e0181142. Leila Arras, Gr´egoire Montavon, Klaus-Robert M¨uller, and Wojciech Samek. 2017b. Explaining recurrent neural network predictions in sentiment analysis. In Eighth Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 159–168, Copenhagen, Denmark. Malika Aubakirova and Mohit Bansal. 2016. Interpreting neural networks to improve politeness comprehension. In Empirical Methods in Natural Language Processing, page 2035–2041, Austin, USA. Sebastian Bach, Alexander Binder, Gr´egoire Montavon, Frederick Klauschen, Klaus-Robert M¨uller, and Wojciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations, San Diego, USA. Trapit Bansal, David Belanger, and Andrew McCallum. 2016. Ask the GRU: Multi-task learning for deep text recommendations. In ACM Conference on Recommender Systems, pages 107–114, Boston, USA. James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2017. Quasi-recurrent neural networks. In International Conference on Learning Representations, Toulon, France. Kyunghyun Cho, Bart Van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103– 111, Doha, Qatar. 349 Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493–2537. Misha Denil, Alban Demiraj, and Nando de Freitas. 2015. Extraction of salient sentences from labelled documents. In International Conference on Learning Representations, San Diego, USA. Finale Doshi-Velez and Been Kim. 2017. A roadmap for a rigorous science of interpretability. CoRR, abs/1702.08608. Bryce Goodman and Seth Flaxman. 2016. European union regulations on algorithmic decision-making and a “right to explanation”. In ICML Workshop on Human Interpretability in Machine Learning, pages 26–30, New York, USA. Yotam Hechtlinger. 2016. Interpretation of prediction models using the input gradient. In Conference on Neural Information Processing Systems, Barcelona, Spain. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Akos K´ad´ar, Grzegorz Chrupała, and Afra Alishahi. 2017. Representation of linguistic form and function in recurrent neural networks. Computational Linguistics, 43(4):761–780. Pieter-Jan Kindermans, Kristof Sch¨utt, Klaus-Robert M¨uller, and Sven D¨ahne. 2016. 
Investigating the influence of noise and distractors on the interpretation of neural networks. In Conference on Neural Information Processing Systems, Barcelona, Spain. Ken Lang. 1995. Newsweeder: Learning to filter netnews. In International Conference on Machine Learning, pages 331–339, Tahoe City, USA. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Empirical Methods in Natural Language Processing, pages 107–117, Austin, USA. Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016a. Visualizing and understanding neural models in NLP. In NAACL-HLT, pages 681–691, San Diego, USA. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. Understanding neural networks through representation erasure. CoRR, abs/1612.08220. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521– 535. Yajie Miao, Jinyu Li, Yongqiang Wang, Shi-Xiong Zhang, and Yifan Gong. 2016. Simplifying long short-term memory acoustic models for fast training and decoding. In International Conference on Acoustics, Speech and Signal Processing, pages 2284–2288. Sina Mohseni and Eric D Ragan. 2018. A humangrounded evaluation benchmark for local explanations of machine learning. CoRR, abs/1801.05075. W James Murdoch and Arthur Szlam. 2017. Automatic rule extraction from long short term memory networks. In International Conference on Learning Representations, Toulon, France. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543, Doha, Qatar. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should I trust you?: Explaining the predictions of any classifier. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144, San Francisco, California. Wojciech Samek, Alexander Binder, Gr´egoire Montavon, Sebastian Lapuschkin, and Klaus-Robert M¨uller. 2016. Evaluating the visualization of what a deep neural network has learned. IEEE transactions on neural networks and learning systems, 28(11):2660–2673. Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-cam: Visual explanations from deep networks via gradient-based localization. In IEEE Conference on Computer Vision and Pattern Recognition, pages 618–626, Honolulu, Hawaii. Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In International Conference on Machine Learning, pages 3145– 3153, Sydney, Australia. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks: Visualising image classification models and saliency maps. In International Conference on Learning Representations, Banff, Canada. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In International Conference on Machine Learning, Sydney, Australia. Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818– 833, Z¨urich, Switzerland. 350 Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, and Stan Sclaroff. 2016. Top-down neural attention by excitation backprop. In European Conference on Computer Vision, pages 543–559, Amsterdam, Netherlands. 
Luisa M Zintgraf, Taco S Cohen, Tameem Adel, and Max Welling. 2017. Visualizing deep neural network decisions: Prediction difference analysis. In International Conference on Learning Representations, Toulon, France.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 351–360 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 351 Improving Text-to-SQL Evaluation Methodology Catherine Finegan-Dollak1∗ Jonathan K. Kummerfeld1∗ Li Zhang1 Karthik Ramanathan2 Sesh Sadasivam1 Rui Zhang3 Dragomir Radev3 Computer Science & Engineering1 and School of Information2 Department of Computer Science3 University of Michigan, Ann Arbor Yale University {cfdollak,jkummerf}@umich.edu [email protected] Abstract To be informative, an evaluation must measure how well systems generalize to realistic unseen data. We identify limitations of and propose improvements to current evaluations of text-to-SQL systems. First, we compare human-generated and automatically generated questions, characterizing properties of queries necessary for real-world applications. To facilitate evaluation on multiple datasets, we release standardized and improved versions of seven existing datasets and one new textto-SQL dataset. Second, we show that the current division of data into training and test sets measures robustness to variations in the way questions are asked, but only partially tests how well systems generalize to new queries; therefore, we propose a complementary dataset split for evaluation of future work. Finally, we demonstrate how the common practice of anonymizing variables during evaluation removes an important challenge of the task. Our observations highlight key difficulties, and our methodology enables effective measurement of future development. 1 Introduction Effective natural language interfaces to databases (NLIDB) would give lay people access to vast amounts of data stored in relational databases. This paper identifies key oversights in current evaluation methodology for this task. In the process, we (1) introduce a new, challenging dataset, (2) standardize and fix many errors in existing datasets, and (3) propose a simple yet effective baseline system.1 ∗The first two authors contributed equally to this work. 1Code and data is available at https://github. com/jkkummerfeld/text2sql-data/ Figure 1: Traditional question-based splits allow queries to appear in both train and test. Our querybased split ensures each query is in only one. First, we consider query complexity, showing that human-written questions require more complex queries than automatically generated ones. To illustrate this challenge, we introduce Advising, a dataset of questions from university students about courses that lead to particularly complex queries. Second, we identify an issue in the way examples are divided into training and test sets. The standard approach, shown at the top of Fig. 1, divides examples based on the text of each question. As a result, many of the queries in the test set are seen in training, albeit with different entity names and with the question phrased differently. This means metrics are mainly measuring robustness to the way a set of known SQL queries can be expressed in English—still a difficult problem, but not a complete test of ability to compose new queries in a familiar domain. We introduce a template-based slot-filling baseline that cannot generalize to new queries, and yet is competitive with prior work on multiple datasets. To measure robustness to new queries, we propose splitting based on the SQL query. 
We show that stateof-the-art systems with excellent performance on traditional question-based splits struggle on querybased splits. We also consider the common practice of variable anonymization, which removes a 352 challenging form of ambiguity from the task. In the process, we apply extensive effort to standardize datasets and fix a range of errors. Previous NLIDB work has led to impressive systems, but current evaluations provide an incomplete picture of their strengths and weaknesses. In this paper, we provide new and improved data, a new baseline, and guidelines that complement existing metrics, supporting future work. 2 Related Work The task of generating SQL representations from English questions has been studied in the NLP and DB communities since the 1970s (Androutsopoulos et al., 1995). Our observations about evaluation methodology apply broadly to the systems cited below. Within the DB community, systems commonly use pattern matching, grammar-based techniques, or intermediate representations of the query (Pazos Rangel et al., 2013). Recent work has explored incorporating user feedback to improve accuracy (Li and Jagadish, 2014). Unfortunately, none of these systems are publicly available, and many rely on domain-specific resources. In the NLP community, there has been extensive work on semantic parsing to logical representations that query a knowledge base (Zettlemoyer and Collins, 2005; Liang et al., 2011; Beltagy et al., 2014; Berant and Liang, 2014), while work on mapping to SQL has recently increased (Yih et al., 2015; Iyer et al., 2017; Zhong et al., 2017). One of the earliest statistical models for mapping text to SQL was the PRECISE system (Popescu et al., 2003, 2004), which achieved high precision on queries that met constraints linking tokens and database values, attributes, and relations, but did not attempt to generate SQL for questions outside this class. Later work considered generating queries based on relations extracted by a syntactic parser (Giordani and Moschitti, 2012) and applying techniques from logical parsing research (Poon, 2013). However, none of these earlier systems are publicly available, and some required extensive engineering effort for each domain, such as the lexicon used by PRECISE. More recent work has produced general purpose systems that are competitive with previous results and are also available, such as Iyer et al. (2017). We also adapt a logical form parser with a sequence to tree approach that makes very few assumptions about the output structure (Dong and Lapata, 2016). One challenge for applying neural models to this task is annotating large enough datasets of question-query pairs. Recent work (Cai et al., 2017; Zhong et al., 2017) has automatically generated large datasets using templates to form random queries and corresponding natural-languagelike questions, and then having humans rephrase the question into English. Another option is to use feedback-based learning, where the system alternates between training and making predictions, which a user rates as correct or not (Iyer et al., 2017). Other work seeks to avoid the data bottleneck by using end-to-end approaches (Yin et al., 2016; Neelakantan et al., 2017), which we do not consider here. One key contribution of this paper is standardization of a range of datasets, to help address the challenge of limited data resources. 3 Data For our analysis, we study a range of text-to-SQL datasets, standardizing them to have a consistent SQL style. 
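For concreteness, one standardized question-query pair might be represented roughly as follows. This is only an illustration of the style described in §3.1–3.2 below (canonical SQL, explicit typed variables); the field names are hypothetical and not necessarily the exact schema of the released files, though the query follows the canonical form shown later in Figure 2.

```python
# Hypothetical record layout; field names are illustrative only.
example = {
    "question": "what are the cities in state_name0 ?",
    "variables": {"state_name0": "alabama"},
    "sql": 'SELECT CITYalias0.CITY_NAME FROM CITY AS CITYalias0 '
           'WHERE CITYalias0.STATE_NAME = "state_name0" ;',
}
```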
ATIS (Price, 1990; Dahl et al., 1994) User questions for a flight-booking task, manually annotated. We use the modified SQL from Iyer et al. (2017), which follows the data split from the logical form version (Zettlemoyer and Collins, 2007). GeoQuery (Zelle and Mooney, 1996) User questions about US geography, manually annotated with Prolog. We use the SQL version (Popescu et al., 2003; Giordani and Moschitti, 2012; Iyer et al., 2017), which follows the logical form data split (Zettlemoyer and Collins, 2005). Restaurants (Tang and Mooney, 2000; Popescu et al., 2003) User questions about restaurants, their food types, and locations. Scholar (Iyer et al., 2017) User questions about academic publications, with automatically generated SQL that was checked by asking the user if the output was correct. Academic (Li and Jagadish, 2014) Questions about the Microsoft Academic Search (MAS) database, derived by enumerating every logical query that could be expressed using the search page of the MAS website and writing sentences to match them. The domain is similar to that of Scholar, but their schemas differ. 353 Yelp and IMDB (Yaghmazadeh et al., 2017) Questions about the Yelp website and the Internet Movie Database, collected from colleagues of the authors who knew the type of information in each database, but not their schemas. WikiSQL (Zhong et al., 2017) A large collection of automatically generated questions about individual tables from Wikipedia, paraphrased by crowd workers to be fluent English. Advising (This Work) Our dataset of questions over a database of course information at the University of Michigan, but with fictional student records. Some questions were collected from the EECS department Facebook page and others were written by CS students with knowledge of the database who were instructed to write questions they might ask in an academic advising appointment. The authors manually labeled the initial set of questions with SQL. To ensure high quality, at least two annotators scored each questionquery pair on a two-point scale for accuracy— did the query generate an accurate answer to the question?—and a three-point scale for helpfulness—did the answer provide the information the asker was probably seeking? Cases with low scores were fixed or removed from the dataset. We collected paraphrases using Jiang et al. (2017)’s method, with manual inspection to ensure accuracy. For a given sentence, this produced paraphrases with the same named entities (e.g. course number EECS 123). To add variation, we annotated entities in the questions and queries with their types—such as course name, department, or instructor—and substituted randomly-selected values of each type into each paraphrase and its corresponding query. This combination of paraphrasing and entity replacement means an original question of “For next semester, who is teaching EECS 123?” can give rise to “Who teaches MATH 456 next semester?” as well as “Who’s the professor for next semester’s CHEM 789?” 3.1 SQL Canonicalization SQL writing style varies. To enable consistent training and evaluation across datasets, we canonicalized the queries: (1) we alphabetically ordered fields in SELECT, tables in FROM, and constraints in WHERE; (2) we standardized table aliases in the form <TABLE NAME>alias<N> for the Nth use of the same table in one query; and (3) we standardized Sets Identified Affected Queries ATIS 141 380 GeoQuery 17 39 Scholar 60 152 Table 1: Manually identified duplicate queries (different SQL for equivalent questions). 
capitalization and spaces between symbols. We confirmed these changes do not alter the meaning of the queries via unit tests of the canonicalization code and manual inspection of the output. We also manually fixed some errors, such as ambiguous mixing of AND and OR (30 ATIS queries). 3.2 Variable Annotation Existing SQL datasets do not explicitly identify which words in the question are used in the SQL query. Automatic methods to identify these variables, as used in prior work, do not account for ambiguities, such as words that could be either a city or an airport. To provide accurate anonymization, we annotated query variables using a combination of automatic and manual processing. Our automatic process extracted terms from each side of comparison operations in SQL: one side contains quoted text or numbers, and the other provides a type for those literals. Often quoted text in the query is a direct copy from the question, while in some cases we constructed dictionaries to map common acronyms, like american airlines– AA, and times, like 2pm–1400. The process flagged cases with ambiguous mappings, which we then manually processed. Often these were mistakes, which we corrected, such as missing constraints (e.g., papers in 2015 with no date limit in the query), extra constraints (e.g., limiting to a single airline despite no mention in the question), inaccurate constraints (e.g., more than 5 as > 4), and inconsistent use of this year to mean different years in different queries. 3.3 Query Deduplication Three of the datasets had many duplicate queries (i.e., semantically equivalent questions with different SQL). To avoid this spurious ambiguity we manually grouped the data into sets of equivalent questions (Table 1). A second person manually inspected every set and ran the queries. Where multiple queries are valid, we kept them all, though only used the first for the rest of this work. 354 Redundancy Measures Complexity Measures Unique Queries Tables Unique tables SELECTs Nesting Question query / pattern Pattern / query / query / query Depth count count [1]/[2] µ Max count µ Max µ Max µ Max µ Max Advising 4570 211 21.7 20.3 90 174 3.2 9 3.0 9 1.23 6 1.18 4 ATIS 5280 947 5.6 7.0 870 751 6.4 32 3.8 12 1.79 8 1.39 8 GeoQuery 877 246 3.6 8.9 327 98 1.4 5 1.1 4 1.77 8 2.03 7 Restaurants 378 23 16.4 22.2 81 17 2.6 5 2.3 4 1.17 2 1.17 2 Scholar 817 193 4.2 5.6 71 146 3.3 6 3.2 6 1.02 2 1.02 2 Academic 196 185 1.1 2.1 12 92 3.2 10 3 6 1.04 3 1.04 2 IMDB 131 89 1.5 2.5 21 52 1.9 5 1.9 5 1.01 2 1.01 2 Yelp 128 110 1.2 1.4 11 89 2.2 4 2 4 1 1 1 1 WikiSQL 80,654 77,840 1.0 165.3 42,816 488 1 1 1 1 1 1 1 1 Table 2: Descriptive statistics for text-to-SQL datasets. Datasets in the first group are human-generated from the NLP community, in the second are human-generated from the DB community, and in the third are automatically-generated. [1]/[2] is Question count / Unique query count. 4 Evaluating on Multiple Datasets Is Necessary For evaluation to be informative it must use data that is representative of real-world queries. If datasets have biases, robust comparisons of models will require evaluation on multiple datasets. For example, some datasets, such as ATIS and Advising, were collected from users and are taskoriented, while others, such as WikiSQL, were produced by automatically generating queries and engaging people to express the query in language. If these two types of datasets differ systematically, evaluation on one may not reflect performance on the other. 
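The cross-dataset statistics that follow are only meaningful because all queries share the canonical style of §3.1. As a toy illustration of that canonicalization, and not the released canonicalization code, the sketch below alphabetizes the clauses of a flat, non-nested query; the input query is taken from Figure 2.

```python
import re

def toy_canonicalize(sql):
    """Toy canonicalizer in the spirit of Section 3.1: alphabetically order
    SELECT fields, FROM tables and WHERE constraints of a flat query
    (no nesting, no OR, conjuncts joined by AND). Not the released code."""
    body = sql.strip().rstrip(";").strip()
    m = re.match(r"SELECT (.+) FROM (.+) WHERE (.+)", body, flags=re.IGNORECASE)
    select, tables, where = m.groups()
    fields = sorted(f.strip() for f in select.split(","))
    froms = sorted(t.strip() for t in tables.split(","))
    constraints = sorted(c.strip() for c in re.split(r"\s+AND\s+", where))
    return ("SELECT " + " , ".join(fields) + " FROM " + " , ".join(froms) +
            " WHERE " + " AND ".join(constraints) + " ;")

print(toy_canonicalize(
    'SELECT RIVERalias0.RIVER_NAME FROM RIVER AS RIVERalias0 '
    'WHERE RIVERalias0.TRAVERSE = "florida"'))
```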
In this section, we provide descriptive statistics aimed at understanding how several datasets differ, especially with respect to query redundancy and complexity. 4.1 Measures We consider a range of measures that capture different aspects of data complexity and diversity: Question / Unique Query Counts We measure dataset size and how many distinct queries there are when variables are anonymized. We also present the mean number of questions per unique query; a larger mean indicates greater redundancy. SQL Patterns Complexity can be described as the answer to the question, “How many queryform patterns would be required to generate this dataset?” Fig. 2 shows an example of a pattern, which essentially abstracts away from the specific table and field names. Some datasets were generated from patterns similar to these, including WikiSQL and Cai et al. (2017). This enables the generation of large numbers of queries, but limits the SELECT <table-alias>.<field> FROM <table> AS <table-alias> WHERE <table-alias>.<field> = <literal> SELECT RIVERalias0.RIVER NAME FROM RIVER AS RIVERalias0 WHERE RIVERalias0.TRAVERSE = "florida"; SELECT CITYalias0.CITY NAME FROM CITY AS CITYalias0 WHERE CITYalias0.STATE NAME = "alabama"; Figure 2: An SQL pattern and example queries. variation between them to only that encompassed by their patterns. We count the number of patterns needed to cover the full dataset, where larger numbers indicate greater diversity. We also report mean queries per pattern; here, larger numbers indicate greater redundancy, showing that many queries fit the same mold. Counting Tables We consider the total number of tables and the number of unique tables mentioned in a query. These numbers differ in the event of self-joins. In both cases, higher values imply greater complexity. Nesting A query with nested subqueries may be more complex than one without nesting. We count SELECT statements within each query to determine the number of sub-queries. We also report the depth of query nesting. In both cases, higher values imply greater complexity. 4.2 Analysis The statistics in Table 2 show several patterns. First, dataset size is not the best indicator of dataset diversity. Although WikiSQL contains fifteen times as many question-query pairs as ATIS, ATIS contains significantly more patterns than 355 WikiSQL; moreover, WikiSQL’s queries are dominated by one pattern that is more than half of the dataset (SELECT col AS result FROM table WHERE col = value). The small, hand-curated datasets developed by the database community— Academic, IMDB, and Yelp—have noticeably less redundancy as measured by questions per unique query and queries per pattern than the datasets the NLP community typically evaluates on. Second, human-generated datasets exhibit greater complexity than automatically generated data. All of the human-generated datasets except Yelp demonstrate at least some nesting. The average query from any of the human-generated datasets joins more than one table. In particular, task-oriented datasets require joins and nesting. ATIS and Advising, which were developed with air-travel and student-advising tasks in mind, respectively, both score in the top three for multiple complexity scores. To accurately predict performance on humangenerated or task-oriented questions, it is thus necessary to evaluate on datasets that test the ability to handle nesting and joins. Training and testing NLP systems, particularly deep learning-based methods, benefits from large datasets. 
However, at present, the largest dataset available does not provide the desired complexity. Takeaway: Evaluate on multiple datasets, some with nesting and joins, to provide a thorough picture of a system’s strengths and weaknesses. 5 Current Data Splits Only Partially Probe Generalizability It is standard best practice in machine learning to divide data into disjoint training, development, and test sets. Otherwise, evaluation on the test set will not accurately measure how well a model generalizes to new examples. The standard splits of GeoQuery, ATIS, and Scholar treat each pair of a natural language question and its SQL query as a single item. Thus, as long as each question-query pair appears in only one set, the test set is not tainted with training data. We call this a questionbased data split. However, many English questions may correspond to the same SQL query. If at least one copy of every SQL query appears in training, then the task evaluated is classification, not true semantic parsing, of the English questions. We can increase the number of distinct SQL queries by varying what entities our questions ask about; the queries for what states border Texas and what states border Massachusetts are not identical. Adding this variation changes the task from pure classification to classification plus slot-filling. Does this provide a true evaluation of the trained model’s performance on unseen inputs? It depends on what we wish to evaluate. If we want a system that answers questions within a particular domain, and we have a dataset that we are confident covers everything a user might want to know about that domain, then evaluating on the traditional question-based split tells us whether the system is robust to variation in how a request is expressed. But compositionality is an essential part of language, and a system that has trained on What courses does Professor Smith teach? and What courses meet on Fridays? should be prepared for What courses that Professor Smith teaches meet on Fridays? Evaluation on the question split does not tell us about a model’s generalizable knowledge of SQL, or even its generalizable knowledge within the present domain. To evaluate the latter, we propose a complementary new division, where no SQL query is allowed to appear in more than one set; we call this the query split. To generate a query split, we substitute variables for entities in each query in the dataset, as described in § 3.2. Queries that are identical when thus anonymized are treated as a single query and randomly assigned—with all their accompanying questions—to train, dev, or test. We include the original question split and the new query split labeling for the new Advising dataset, as well as ATIS, GeoQuery, and Scholar. For the much smaller Academic, IMDB, Restaurant, and Yelp datasets, we include question- and query- based buckets for cross validation. 5.1 Systems Recently, a great deal of work has used variations on the seq2seq model. We compare performance of a basic seq2seq model (Sutskever et al., 2014), and seq2seq with attention over the input (Bahdanau et al., 2015), implemented with TensorFlow seq2seq (Britz et al., 2017). We also extend that model to include an attention-based copying option, similar to Jia and Liang (2016). Our output vocabulary for the decoder includes a special token, COPY. 
If COPY has the highest probability at step t, we replace it with the input token with the 356 Flight from Denver to Boston O O city0 O city1 Query Type 42 Figure 3: Baseline: blue boxes are LSTM cells and the black box is a feed-forward network. Outputs are the query template to use (right) and which tokens to fill it with (left). max of the normalized attention scores. Our loss function is the sum of two terms: first, the categorical cross entropy for the model’s probability distribution over the output vocabulary tokens; and second, the loss for word copying. When the correct output token is COPY, the second loss term is the categorical cross entropy of the distribution of attention scores at time t. Otherwise it is zero. For comparison, we include systems from two recent papers. Dong and Lapata (2016) used an attention-based seq2tree model for semantic parsing of logical forms; we apply their code here to SQL datasets. Iyer et al. (2017) use a seq2seq model with automatic dataset expansion through paraphrasing and SQL templates.2 We could not find publicly available code for the non-neural text-to-SQL systems discussed in Section 2. Also, most of those approaches require development of specialized grammars or templates for each new dataset they are applied to, so we do not compare such systems. 5.2 New Template Baseline In addition to the seq2seq models, we develop a new baseline system for text-to-SQL parsing which exploits repetitiveness in data. First, we automatically generate SQL templates from the training set. The system then makes two predictions: (1) which template to use, and (2) which words in the sentence should fill slots in the template. This system is not able to generalize beyond the queries in the training set, so it will fail completely on the new query-split data setting. Fig. 3 presents the overall architecture, which we implemented in DyNet (Neubig et al., 2017). A 2 We enable Iyer et al. (2017)’s paraphrasing data augmentation, but not their template-based augmentation because templates do not exist for most of the datasets (though they also found it did not significantly improve performance). Note, on ATIS and Geo their evaluation assumed no ambiguity in entity identification, which is equivalent to our Oracle Entities condition (§5.3). bidirectional LSTM provides a prediction for each word, either O if the word is not used in the final query, or a symbol such as city1 to indicate that it fills a slot. The hidden states of the LSTM at each end of the sentence are passed through a small feed-forward network to determine the SQL template to use. This architecture is simple and enables a joint choice of the tags and the template, though we do not explicitly enforce agreement. To train the model, we automatically construct a set of templates and slots. Slots are determined based on the variables in the dataset, with each SQL variable that is explicitly given in the question becoming a slot. We can construct these templates because our new version of the data explicitly defines all variables, their values, and where they appear in both question and query. For completeness, we also report on an oracle version of the template-based system (performance if it always chose the correct template from the train set and filled all slots correctly). 5.3 Oracle Entity Condition Some systems, such as Dong and Lapata’s model, are explicitly designed to work on anonymized data (i.e., data where entity names are replaced with a variable indicating their type). 
5.3 Oracle Entity Condition

Some systems, such as Dong and Lapata's model, are explicitly designed to work on anonymized data (i.e., data where entity names are replaced with a variable indicating their type). Others, such as attention-based copying models, treat identification of entities as an inextricable component of the text-to-SQL task. We therefore describe results on both the actual datasets with entities in place and a version anonymized using the variables described in § 3.2. We refer to the latter as the oracle entity condition.

5.4 Results and Analysis

We hypothesized that even a system unable to generalize can achieve good performance on question-based splits of datasets, and the results in Table 3 substantiate that for the NLP community's datasets. The template-based, slot-filling baseline was competitive with state-of-the-art systems for question split on the four datasets from the NLP community. The template-based oracle performance indicates that for these datasets anywhere from 70-100% accuracy on question-based split could be obtained by selecting a template from the training set and filling in the right slots.

Model | Advising ?/Q | ATIS ?/Q | GeoQuery ?/Q | Restaurants ?/Q | Scholar ?/Q | Academic ?/Q | IMDB ?/Q | Yelp ?/Q
No Variable Anonymization
Baseline | 80/0 | 46/0 | 57/0 | 95/0 | 52/0 | 0/0 | 0/0 | 1/0
seq2seq | 6/0 | 8/0 | 27/7 | 47/0 | 19/0 | 6/7 | 1/0 | 0/0
+ Attention | 29/0 | 46/18 | 63/21 | 100/2 | 33/0 | 71/64 | 7/3 | 2/2
+ Copying | 70/0 | 51/32 | 71/20 | 100/4 | 59/5 | 81/74 | 26/9 | 12/4
D&L seq2tree | 46/2 | 46/23 | 62/31 | 100/11 | 44/6 | 63/54 | 6/2 | 1/2
Iyer et al. | 41/1 | 45/17 | 66/40 | 100/8 | 44/3 | 76/70 | 10/4 | 6/6
With Oracle Entities
Baseline | 89/0 | 56/0 | 56/0 | 95/0 | 66/0 | 0/0 | 7/0 | 8/0
seq2seq | 21/0 | 14/0 | 49/14 | 71/6 | 23/0 | 10/9 | 6/0 | 12/9
+ Attention | 88/0 | 57/23 | 73/31 | 100/32 | 71/4 | 77/74 | 44/17 | 33/28
D&L seq2tree | 88/8 | 56/34 | 68/23 | 100/21 | 68/6 | 65/61 | 36/10 | 26/23
Iyer et al. | 88/6 | 58/32 | 71/49 | 100/33 | 71/1 | 77/75 | 52/24 | 44/32
Baseline-Oracle | 100/0 | 69/0 | 78/0 | 100/0 | 84/0 | 11/0 | 47/0 | 25/0
Table 3: Accuracy of neural text-to-SQL systems on English question splits ('?' columns) and SQL query splits ('Q' columns). The vertical line separates datasets from the NLP (left) and DB (right) communities. Results for Iyer et al. (2017) are slightly lower here than in the original paper because we evaluate on SQL output, not the database response.

For the three datasets developed by the databases community, the effect of question-query split is far less pronounced. The small sizes of these datasets cannot account for the difference, since even the oracle baseline did not have much success on these question splits, and since the baseline was able to handle the small Restaurants dataset. Looking back at Section 4, however, we see that these are the datasets with the least redundancy in Table 2. Because their question:unique-query ratios are nearly 1:1, the question splits and query splits of these datasets were quite similar. Reducing redundancy does not improve performance on query split, though; at most, it reduces the difference between performance on the two splits. IMDB and Yelp both show weak results on query split despite their low redundancy. Experiments on a non-redundant version of query split for Advising, ATIS, GeoQuery, and Restaurant that contained only one question for each query confirmed this: in each case, accuracy remained the same or declined relative to regular query split. Having ruled out redundancy as a cause for the exceptional performance on Academic's query split, we suspect the simplicity of its questions and the compositionality of its queries may be responsible.
Every question in the dataset begins return me followed by a phrase indicating the desired field, optionally followed by one or more constraints; for instance, return me the papers by 'author name0' and return me the papers by 'author name0' on journal name0.

None of this, of course, is to suggest that question-based split is an easy problem, even on the NLP community's datasets. Except for the Advising and Restaurants datasets, even the oracle version of the template-based system is far from perfect. Access to oracle entities helps performance of non-copying systems substantially, as we would expect. Entity matching is thus a nontrivial component of the task. But the query-based split is certainly more difficult than the question-based split. Across datasets and systems, performance suffered on query split. Access to oracle entities did not remove this effect.

Many of the seq2seq models do show some ability to generalize, though. Unlike the template-based baseline, many were able to eke out some performance on query split. On question split, ATIS is the most difficult of the NLP datasets, yet on query split, it is among the easiest. To understand this apparent contradiction, we must consider what kinds of mistakes systems make and the contexts in which they appear.

We therefore analyze the output of the attention-based copying model in greater detail. We categorize each output as shown in column one of Table 4. The "Correct" category is self-explanatory. "Entity problem only" means that the query would have been correct but for a mistake in one or more entity names. "Different template" means that the system output was the same as another query from the dataset but for the entity names; however, it did not match the correct query for this question. "No template match" contains both the most mundane and the most interesting errors. Here, the system output a query that is not copied from training data. Sometimes, this is a simple error, such as inserting an extra comma in the WHERE clause. Other times, it is recombining segments of queries it has seen into new queries. This is necessary but not sufficient model behavior in order to do well on the query split. In at least one case, this category includes a semantically equivalent query marked as incorrect by the exact-match accuracy metric.3

3 For the question which of the states bordering pennsylvania has the largest population, the gold query ranked the options by population and kept the top result, while the system output used a subquery to find the max population then selected states that had that population.

Category | Measure | Advising Question | Advising Query | ATIS Question | ATIS Query | GeoQuery Question | GeoQuery Query | Scholar Question | Scholar Query
Correct | Count | 369 | 5 | 227 | 111 | 191 | 56 | 129 | 17
Correct | µ Length | 83.8 | 165.8 | 55.1 | 69.2 | 19.6 | 21.5 | 38.0 | 30.2
Entity problem | Count | 10 | 0 | 1 | 6 | 5 | 0 | 5 | 0
Entity problem | µ Length | 111.8 | N/A | 28.0 | 71.3 | 17.2 | N/A | 42.6 | N/A
Different template | Count | 43 | 675 | 94 | 68 | 53 | 84 | 40 | 94
Different template | µ Length | 69.8 | 68.4 | 85.8 | 72.1 | 25.6 | 18.0 | 43.9 | 39.8
No template match | Count | 79 | 25 | 122 | 162 | 30 | 42 | 44 | 204
No template match | µ Length | 88.8 | 90.5 | 113.8 | 92.2 | 29.7 | 25.0 | 42.1 | 41.6
Table 4: Types of errors by the attention-based copying model for question and query splits, with (Count)s of queries in each category, and the (µ Length) of gold queries in the category.

Table 4 shows the number of examples from the test set that fell into each category, as well as the mean length of gold queries ("length") for each category. Short queries are easier than long ones in the question-based condition. In most cases, length in "correct" is shorter than length in either "different template" or "no template match" categories. In addition, for short queries, the model seems to prefer to copy a query it has seen before; for longer ones, it generates a new query.
In every case but one, mean length in "different template" is less than in "no template match." Interestingly, in ATIS and GeoQuery, where the model performs tolerably well on query split, the length for correct queries in query split is higher than the length for correct queries from the question split. Since, as noted above, recombination of template pieces (as we see in "no template match") is a necessary step for success on query split, it may be that longer queries have a higher probability of recombination, and therefore a better chance of being correct in query split. The data from Scholar does not support this position; however, note that only 17 queries were correct in Scholar query split, suggesting caution in making generalizations from this set.

These results also seem to indicate that our copying mechanism effectively deals with entity identification. Across all datasets, we see only a small number of entity-problem-only examples. However, comparing the rows from Table 3 for seq2seq+Copy at the top and seq2seq+Attention in the oracle entities condition, it is clear that having oracle entities provides a useful signal, with consistent gains in performance.

Takeaways: Evaluate on both question-based and query-based dataset splits. Additionally, variable anonymization noticeably decreases the difficulty of the task; thus, thorough evaluations should include results on datasets without anonymization.

5.5 Logic Variants

To see if our observations on query and question split performance apply beyond SQL, we also considered the logical form annotations for ATIS and GeoQuery (Zettlemoyer and Collins, 2005, 2007). We retrained Jia and Liang (2016)'s baseline and full system. Interestingly, we found limited impact on performance, measured with either logical forms or denotations. To understand why, we inspected the logical form datasets. In both ATIS and GeoQuery, the logical form version has a larger set of queries after variable identification. This seems to be because the logic abstracts away from the surface form less than SQL does. For example, these questions have the same SQL in our data, but different logical forms:

what state has the largest capital
(A, (state(A), loc(B, A), largest(B, capital(B))))

which state 's capital city is the largest
(A, largest(B, (state(A), capital(A, B), city(B))))

SELECT CITYalias0.STATE_NAME FROM CITY AS CITYalias0 WHERE CITYalias0.POPULATION = ( SELECT MAX( CITYalias1.POPULATION ) FROM CITY AS CITYalias1 , STATE AS STATEalias0 WHERE STATEalias0.CAPITAL = CITYalias1.CITY_NAME ) ;

Other examples include variation in the logical form between sentences with largest and largest population even though the associated dataset only has population figures for cities (not area or any other measure of size). Similarly in ATIS, the logical form will add (flight $0) if the question mentions flights explicitly, making these two queries different, even though they convey the same user intent:

what flights do you have from bwi to sfo
i need a reservation from bwi to sfo

By being closer to a syntactic representation, the queries end up being more compositional, which encourages the model to learn more compositionality than the SQL models do.
6 Conclusion In this work, we identify two issues in current datasets for mapping questions to SQL queries. First, by analyzing question and query complexity we find that human-written datasets require properties that have not yet been included in large-scale automatically generated query sets. Second, we show that the generalizability of systems is overstated by the traditional data splits. In the process we also identify and fix hundreds of mistakes across multiple datasets and homogenize the SQL query structures to enable effective multi-domain experiments. Our analysis has clear implications for future work. Evaluating on multiple datasets is necessary to ensure coverage of the types of questions humans generate. Developers of future large-scale datasets should incorporate joins and nesting to create more human-like data. And new systems should be evaluated on both question- and querybased splits, guiding the development of truly general systems for mapping natural language to structured database queries. Acknowledgments We would like to thank Laura Wendlandt, Walter Lasecki, and Will Radford for comments on an earlier draft and the anonymous reviewers for their helpful suggestions. This material is based in part upon work supported by IBM under contract 4915012629. Any opinions, findings, conclusions or recommendations expressed above are those of the authors and do not necessarily reflect the views of IBM. References I. Androutsopoulos, G. D. Ritchie, and P. Thanisch. 1995. Natural Language Interfaces to Databases An Introduction. Natural Language Engineering, 1(709):29–81. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the ICLR, pages 1–15, San Diego, California. Islam Beltagy, Katrin Erk, and Raymond Mooney. 2014. Semantic parsing using distributional semantics and probabilistic logic. Proceedings of the ACL 2014 Workshop on Semantic Parsing, pages 7–11. Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1415–1425. Denny Britz, Anna Goldie, Minh-thang Luong, and Quoc Le. 2017. Massive exploration of neural machine translation architectures. ArXiv e-prints. Ruichu Cai, Boyan Xu, Xiaoyan Yang, Zhenjie Zhang, and Zijian Li. 2017. An encoder-decoder framework translating natural language to database queries. ArXiv e-prints. Deborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriber. 1994. Expanding the scope of the ATIS task: The ATIS-3 corpus. Proceedings of the workshop on Human Language Technology, pages 43–48. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 1:33–43. Alessandra Giordani and Alessandro Moschitti. 2012. Translating questions to SQL queries with generative parsers discriminatively reranked. In COLING 2012, pages 401–410. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 963—-973, Vancouver, Canada. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12–22. Youxuan Jiang, Jonathan K. Kummerfeld, and Walter S. Lasecki. 2017. Understanding Task Design Trade-offs in Crowdsourced Paraphrase Collection. 360 In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 103–109, Vancouver, Canada. Fei Li and H. V. Jagadish. 2014. Constructing an interactive natural language interface for relational databases. In Proceedings of the VLDB Endowment, pages 73–84. Percy Liang, Michael I Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 590– 599, Portland, Oregon. Arvind Neelakantan, Quoc V Le, Mart´ın Abadi, Andrew McCallum, and Dario Amodei. 2017. Learning a natural language interface with neural programmer. Proceedings of the ICLR, pages 1–10. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980. Rodolfo A. Pazos Rangel, Juan Javier Gonz´alez Barbosa, Marco Antonio Aguirre Lam, Jos´e Antonio Mart´ınez Flores, and H´ector J. Fraire Huacuja. 2013. Natural language interfaces to databases: An analysis of the state of the art. Springer Berlin Heidelberg, Berlin, Heidelberg. Hoifung Poon. 2013. Grounded unsupervised semantic parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 933–943. Ana-Maria Popescu, Alex Armanasu, Oren Etzioni, David Ko, and Alexander Yates. 2004. Modern natural language interfaces to databases: composing statistical parsing with semantic tractability. In Proceedings of the 20th International Conference on Computational Linguistics, pages 141–147. Ana-Maria Popescu, Oren Etzioni, and Henry Kautz. 2003. Towards a theory of natural language interfaces to databases. Proceedings of the 8th International Conference on Intelligent User Interfaces IUI 03, pages 149–157. Patti J. Price. 1990. Evaluation of spoken language systems: The ATIS domain. Proc. of the Speech and Natural Language Workshop, pages 91–95. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems (NIPS), pages 3104–3112. Lappoon R. Tang and Raymond J. Mooney. 2000. Automated Construction of Database Interfaces: Integrating Statistical and Relational Learning for Semantic Parsing. Proceedings of the Joint SIGDAT Conference on Emprical Methods in Natural Language Processing and Very Large Corpora, pages 133–141. Navid Yaghmazadeh, Yuepeng Wang, Isil Dillig, and Thomas Dillig. 2017. Type- and content-driven synthesis of SQL queries from natural language. ArXiv e-prints. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1321–1331, Beijing, China. Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. 2016. Neural Enquirer: Learning to query tables in natural language. Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), pages 2308–2314. John M. Zelle and Raymond J. Mooney. 1996. Learning to Parse Database queries using inductive logic proramming. Learning, pages 1050–1055. Luke Zettlemoyer and Michael Collins. 2005. Learning to Map Sentences to Logical Form : Structured Classification with Probabilistic Categorial Grammars. 21st Conference on Uncertainty in Artificial Intelligence, pages 658–666. Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 678–687, Prague, Czech Republic. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning. ArXiv e-prints, pages 1–12.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 361–372 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 361

Semantic Parsing with Syntax- and Table-Aware SQL Generation
Yibo Sun§∗, Duyu Tang‡, Nan Duan‡, Jianshu Ji♮, Guihong Cao♮, Xiaocheng Feng§, Bing Qin§, Ting Liu§, Ming Zhou‡
§Harbin Institute of Technology, Harbin, China ‡Microsoft Research Asia, Beijing, China ♮Microsoft AI and Research, Redmond WA, USA
{ybsun,xcfeng,qinb,tliu}@ir.hit.edu.cn {dutang,nanduan,jianshuj,gucao,mingzhou}@microsoft.com
∗ Work done during an internship at Microsoft Research Asia.

Abstract

We present a generative model to map natural language questions into SQL queries. Existing neural network based approaches typically generate a SQL query word-by-word; however, a large portion of the generated results is incorrect or not executable due to the mismatch between question words and table contents. Our approach addresses this problem by considering the structure of the table and the syntax of the SQL language. The quality of the generated SQL query is significantly improved through (1) learning to replicate content from column names, cells or SQL keywords; and (2) improving the generation of the WHERE clause by leveraging the column-cell relation. Experiments are conducted on WikiSQL, a recently released dataset with the largest number of question-SQL pairs. Our approach significantly improves the state-of-the-art execution accuracy from 69.0% to 74.4%.

1 Introduction

We focus on semantic parsing that maps natural language utterances to executable programs (Zelle and Mooney, 1996; Wong and Mooney, 2007; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2011; Pasupat and Liang, 2015; Iyer et al., 2017; Iyyer et al., 2017). In this work, we regard SQL as the programming language, which could be executed on a table or a relational database to obtain an outcome. Datasets are the main driver of progress for statistical approaches in semantic parsing (Liang, 2016). Recently, Zhong et al. (2017) release WikiSQL, the largest hand-annotated semantic parsing dataset, which is an order of magnitude larger than other datasets in terms of both the number of logical forms and the number of tables. A pointer network (Vinyals et al., 2015) based approach is developed, which generates a SQL query word-by-word through replicating from a word sequence consisting of question words, column names and SQL keywords. However, a large portion of generated results is incorrect or not executable due to the mismatch between question words and column names (or cells). This also reflects the real scenario where users do not always use exactly the same column name or cell content to express the question.

To address the aforementioned problem, we present a generative semantic parser that considers the structure of the table and the syntax of the SQL language. The approach is partly inspired by the success of structure/grammar driven neural network approaches in semantic parsing (Xiao et al., 2016; Krishnamurthy et al., 2017). Our approach is based on pointer networks, which encode the question into continuous vectors and synthesize the SQL query with three channels. The model learns when to generate a column name, a cell or a SQL keyword. We further incorporate the column-cell relation to mitigate ill-formed outcomes. We conduct experiments on WikiSQL.
Results show that our approach outperforms existing systems, improving state-of-the-art execution accuracy to 74.4% and logical form accuracy to 60.7%. An extensive analysis reveals the advantages and limitations of our approach.

2 Task Formulation and Dataset

As shown in Figure 1, we focus on sequence-to-SQL generation in this work.

Figure 1: A brief illustration of the task. The focus of this work is sequence-to-SQL generation. The example in the figure maps the question "what 's the total number of songs originally performed by anna nalick ?" over a table of songs and original artists to the SQL query SELECT COUNT Song choice WHERE Original artist = anna christine nalick, whose execution gives the answer.

Formally, the task takes a question q and a table t consisting of n column names and n × m cells as the input, and outputs a SQL query y. We do not consider the join operation over multiple relational tables, which we leave to future work. We use WikiSQL (Zhong et al., 2017), the largest hand-annotated semantic parsing dataset to date, which consists of 87,726 questions and SQL queries distributed across 26,375 tables from Wikipedia.

3 Related Work

Semantic Parsing. Semantic parsing aims to map natural language utterances to programs (e.g., logical forms), which will be executed to obtain the answer (denotation) (Zettlemoyer and Collins, 2005; Liang et al., 2011; Berant et al., 2013; Poon, 2013; Krishnamurthy and Kollar, 2013; Pasupat and Liang, 2016; Sun et al., 2016; Jia and Liang, 2016; Kočiský et al., 2016; Lin et al., 2017). Existing studies differ in (1) the form of the knowledge base, e.g. facts from Freebase, a table (or relational database), an image (Suhr et al., 2017; Johnson et al., 2017; Hu et al., 2017; Goldman et al., 2017) or a world state (Long et al., 2016); (2) the programming language, e.g. first-order logic, lambda calculus, lambda DCS, SQL, a parameterized neural programmer (Yin et al., 2015; Neelakantan et al., 2016), or coupled distributed and symbolic executors (Mou et al., 2017); (3) the supervision used for learning the semantic parser, e.g. question-denotation pairs, binary correct/incorrect feedback (Artzi and Zettlemoyer, 2013), or richer supervision of question-logical form pairs (Dong and Lapata, 2016). In this work, we study semantic parsing over tables, which is critical for users to access relational databases with natural language, and could serve users' information needs for structured data on the web. We use SQL as the programming language, which has broad acceptance among programmers.

Natural Language Interface for Databases. Our work relates to the area of accessing databases with a natural language interface (Dahl et al., 1994; Brad et al., 2017). Popescu et al. (2003) use a parser to parse the question, and then use lexicon matching between the question and the table column names/cells. Giordani and Moschitti (2012) parse the question with a dependency parser, compose candidate SQL queries with heuristic rules, and use a kernel-based SVM ranker to rank the results. Li and Jagadish (2014) translate natural language utterances into SQL queries based on dependency parsing results, and interact with users to ensure the correctness of the interpretation process. Yaghmazadeh et al.
(2017) build a semantic parser on top of SEMPRE (Pasupat and Liang, 2015) to get a SQL sketch, which only has the SQL shape and will be subsequently completed based on the table content. Iyer et al. (2017) map utterances to SQL queries through sequence-to-sequence learning; user feedback is incorporated to reduce the number of queries to be labeled. Zhong et al. (2017) develop an augmented pointer network, which is further improved with reinforcement learning for SQL sequence prediction. Xu et al. (2017) adopt a sequence-to-set model to predict WHERE columns, and use an attentional model to predict the slots in the WHERE clause. Different from (Iyer et al., 2017; Zhong et al., 2017), our approach leverages SQL syntax and table structure. Compared to (Popescu et al., 2003; Giordani and Moschitti, 2012; Yaghmazadeh et al., 2017), our approach is end-to-end learning and independent of a syntactic parser or manually designed templates. We are aware of existing studies that combine reinforcement learning and maximum likelihood estimation (MLE) (Guu et al., 2017; Mou et al., 2017; Liang et al., 2017). However, the focus of this work is the design of the neural architecture, even though we also implement an RL strategy (refer to §4.4).

Structure/Grammar Guided Neural Decoder. Our approach could be viewed as an extension of sequence-to-sequence learning (Sutskever et al., 2014; Bahdanau et al., 2015) with a tailored neural decoder driven by the characteristics of the target language (Yin and Neubig, 2017; Rabinovich et al., 2017). Methods with similar intuitions have been developed for language modeling (Dyer et al., 2016), neural machine translation (Wu et al., 2017) and lambda calculus based semantic parsing (Dong and Lapata, 2016; Krishnamurthy et al., 2017). The difference is that our model is developed for sequence-to-SQL generation, in which table structure and SQL syntax are considered.

4 Methodology

We first describe the background on pointer networks, and then present our approach that considers the table structure and the SQL syntax.

4.1 Background: Pointer Networks

Pointer networks were originally introduced by Vinyals et al. (2015); they take a sequence of elements as the input and output a sequence of discrete tokens corresponding to positions in the input sequence. The approach has been successfully applied in reading comprehension (Kadlec et al., 2016) for pointing to the positions of the answer span in the document, and also in sequence-to-sequence based machine translation (Gulcehre et al., 2016) and text summarization (Gu et al., 2016) for replicating rare words from the source sequence to the target sequence.

The approach of Zhong et al. (2017) is based on pointer networks. The encoder is a recurrent neural network (RNN) with gated recurrent units (GRU) (Cho et al., 2014), whose input is the concatenation of question words, words from column names and SQL keywords. The decoder is another GRU based RNN, which works in a sequential way and generates a word at each time step. The generation of a word is actually the selective replication of a word from the input sequence, the probability distribution of which is calculated with an attention mechanism (Bahdanau et al., 2015). The probability of generating the i-th word x_i in the input sequence at the t-th time step is calculated as Equation 1, where h_t^{dec} is the decoder hidden state at the t-th time step, h_i^{enc} is the encoder hidden state of the word x_i, and W_a is a model parameter.
p(y_t = x_i | y_{<t}, x) \propto \exp(W_a [h_t^{dec}; h_i^{enc}])    (1)

It is worth noting that if a column name consists of multiple words (such as "original artist" in Figure 1), these words are separated in the input sequence. The approach has no guarantee that a multi-word column name could be successively generated, which would affect the executability of the generated SQL query.

4.2 STAMP: Syntax- and Table-Aware seMantic Parser

Figure 2 illustrates an overview of the proposed model, which is abbreviated as STAMP.

Figure 2: An illustration of the proposed approach. At each time step, a switching gate selects a channel to predict a column name (maybe composed of multiple words), a cell or a SQL keyword. The words in green below the SQL tokens stand for the results of the switching gate at each time step. The running example in the figure is a table with the columns Pick #, CFL Team, Player, Position and College (two of its rows contain the College cell "York"), and the decoded query is SELECT COUNT CFL Team WHERE College = "York".

Different from Zhong et al. (2017), the word is not the basic unit to be generated in STAMP. As is shown, there are three "channels" in STAMP, among which the column channel predicts a column name, the value channel predicts a table cell and the SQL channel predicts a SQL keyword. Accordingly, the probability of generating a target token is formulated in Equation 2, where z_t stands for the channel selected by the switching gate, p_z(·) is the probability to choose a channel, and p_w(·) is similar to Equation 1 and is a probability distribution over the tokens of one of the three channels.

p(y_t | y_{<t}, x) = \sum_{z_t} p_w(y_t | z_t, y_{<t}, x) \, p_z(z_t | y_{<t}, x)    (2)

One advantage of this architecture is that it inherently addresses the problem of generating partial column names/cells because an entire column name/cell is the basic unit to be generated. Another advantage is that the column-cell relation and the question-cell connection can be naturally integrated in the model, which will be described below.

Specifically, our encoder takes a question as the input. A bidirectional RNN with GRU units is applied to the question, and the concatenation of both ends is used as the initial state of the decoder. Another bidirectional RNN is used to compute the representation of a column name (or a cell), in case that each unit contains multiple words (Dong et al., 2015). Essentially, each channel is an attentional neural network. For the cell and SQL channels, the input of the attention module only contains the decoder hidden state and the representation of the token to be scored, calculated as follows,

p_w^{sql}(i) \propto \exp(W_{sql} [h_t^{dec}; e_i^{sql}])    (3)

where e_i^{sql} stands for the representation of the i-th SQL keyword. As suggested by Zhong et al. (2017), we also concatenate the question representation into the input of the column channel in order to improve the accuracy of the SELECT column. We implement the switching gate with a feed-forward neural network, in which the output is a softmax function and the input is the decoder hidden state h_t^{dec}.
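The sketch below shows how the switching gate and the per-channel attention of Equations 2 and 3 can be combined at a single decoding step. This is our simplification (single example, precomputed token representations), not the authors' code; the module and tensor names are assumptions.

```python
import torch
import torch.nn.functional as F

def decode_step(h_dec, gate_net, score_nets, channel_reps):
    """One switching-gate decoding step in the spirit of Eq. (2) and (3).

    h_dec:        decoder hidden state, shape (hidden,)
    gate_net:     feed-forward module mapping h_dec to 3 channel logits (p_z)
    score_nets:   dict channel name -> linear module scoring [h_dec; e_i]
    channel_reps: dict channel name -> (n_tokens, rep_dim) token representations
    """
    p_channel = F.softmax(gate_net(h_dec), dim=-1)          # p_z(z_t | y_<t, x)
    distributions = {}
    for idx, name in enumerate(("column", "cell", "sql")):
        reps = channel_reps[name]                           # (n, rep_dim)
        h = h_dec.unsqueeze(0).expand(reps.size(0), -1)     # (n, hidden)
        scores = score_nets[name](torch.cat([h, reps], dim=-1)).squeeze(-1)
        # p_w(y_t | z_t, .) weighted by the gate gives the mixture in Eq. (2)
        distributions[name] = p_channel[idx] * F.softmax(scores, dim=-1)
    return distributions
```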
4.3 Improved with Column-Cell Relation

We further improve the STAMP model by considering the column-cell relation, which is important for predicting the WHERE clause.

On one hand, the column-cell relation could improve the prediction of the SELECT column. We observe that a cell, or a part of it, typically appears in the question acting as the WHERE value, such as "anna nalick" for "anna christine nalick". However, a column name might be represented with a totally different utterance, which is a "semantic gap". Supposing the question is "How many schools did player number 3 play at?" and the SQL query is "Select count School Club Team where No. = 3", we can see that the column names "School Club Team" and "No." are different from their corresponding utterances "schools" and "number" in the natural language question. Thus, table cells could be regarded as the pivot that connects the question and the column names (the "linking" component in Figure 2). For instance, taking the question from Figure 2, the word "York" would help to predict the column name as "College" rather than "Player". There might be different possible ways to implement this intuition. We use cell information to enhance the column name representation in this work. The vector of a column name is further concatenated with a question-aware cell vector, which is a weighted average over the cell vectors belonging to the same column. The probability distribution in the column channel is calculated as Equation 4,

p_w^{col}(i) \propto \exp(W_{col} [h_t^{dec}; h_i^{col}; \sum_{j \in col_i} \alpha_j^{cell} h_j^{cell}])    (4)

We use the number of cell words occurring in the question to measure the importance of a cell, which is further normalized through a softmax function to yield the final weight \alpha_j^{cell} \in [0, 1]. An alternative measurement is to use an additional attention model whose input contains the question vector and the cell vector. We favor the intuitive and efficient way in this work.
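A small sketch of the question-aware cell weighting that feeds Equation 4 is given below: the weight of a cell is the softmax-normalized count of its words that appear in the question, and the weighted cell average is concatenated to the column-name vector. The function, argument names and shapes are our own illustration, not the released implementation.

```python
import torch
import torch.nn.functional as F

def enhanced_column_vector(h_col, cell_vectors, cell_words, question_words):
    """Concatenate a column vector with a question-aware average of the
    vectors of the cells in that column (the weighting used in Eq. 4).

    h_col:          (dim,) vector of one column name
    cell_vectors:   (n_cells, dim) vectors of the cells in this column
    cell_words:     list of token lists, one per cell
    question_words: set of tokens in the question
    """
    # importance of a cell = number of its words occurring in the question,
    # normalized with a softmax to give alpha_j in [0, 1]
    counts = torch.tensor(
        [float(sum(w in question_words for w in words)) for words in cell_words])
    alpha = F.softmax(counts, dim=-1)                       # (n_cells,)
    cell_summary = (alpha.unsqueeze(-1) * cell_vectors).sum(dim=0)
    return torch.cat([h_col, cell_summary], dim=-1)         # input to W_col in Eq. (4)
```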
We use a baseline strategy (Zaremba and Sutskever, 2015) to decrease the learning variance. The expected reward (Williams, 1992) for an instance is calculated as E(yg) = Pk j=1 logp(yj)R(yj, yg), where yg is the ground truth SQL query, yj is a generated SQL query, p(yj) is the probability of yj being generated by our model, and k is the number of sampled SQL queries. R(yj, yg) is the same reward function defined by Zhong et al. (2017), which is +1 if yj is executed to yield the correct answer; −1 if 1This constraint is suitable in this work as we do not consider the nested query in the where clause, such as “where College = select College from table”, which is also the case not included in the WikiSQL dataset. We leave generating nested SQL query in the future work. yj is a valid SQL query and is executed to get an incorrect answer; and −2 if yj is not a valid SQL query. In this way, model parameters could be updated with policy gradient over question-answer pairs. 4.5 Training and Inference As the WikiSQL data contains rich supervision of question-SQL pairs, we use them to train model parameters. The model has two cross-entropy loss functions, as given below. One is for the switching gate classifier (pz) and another is for the attentional probability distribution of a channel (pw). l = − X t logpz(zt|y<t, x)− X t logpw(yt|zt, y<t, x) (6) Our parameter setting strictly follows Zhong et al. (2017). We represent each word using word embedding2 (Pennington et al., 2014) and the mean of the sub-word embeddings of all the n-grams in the word (Hashimoto et al., 2016)3. The dimension of the concatenated word embedding is 400. We clamp the embedding values to avoid over-fitting. We set the dimension of encoder and decoder hidden state as 200. During training, we randomize model parameters from a uniform distribution with fan-in and fan-out, set batch size as 64, set the learning rate of SGD as 0.5, and update the model with stochastic gradient descent. Greedy search is used in the inference process. We use the model trained from question-SQL pairs as initialization and use RL strategy to fine-tune the model. SQL queries used for training RL are sampled based on the probability distribution of the model learned from question-SQL pairs. We tune the best model on the dev set and do inference on the test set for only once. This protocol is used in model comparison as well as in ablations. 5 Experiment We conduct experiments on the WikiSQL dataset4, which includes 61, 297/9, 145/17, 284 examples in the training/dev/test sets. Each instance consists of a question, a table, a SQL query and a result. Following Zhong et al. (2017), we use two 2http://nlp.stanford.edu/data/glove. 840B.300d.zip 3http://www.logos.t.u-tokyo.ac.jp/ ˜hassy/publications/arxiv2016jmt/jmt_ pre-trained_embeddings.tar.gz 4https://github.com/salesforce/WikiSQL 366 Methods Dev Test Acclf Accex Acclf Accex Attentional Seq2Seq 23.3% 37.0% 23.4% 35.9% Aug.PntNet (Zhong et al., 2017) 44.1% 53.8% 43.3% 53.3% Aug.PntNet (re-implemented by us) 51.5% 58.9% 52.1% 59.2% Seq2SQL (no RL) (Zhong et al., 2017) 48.2% 58.1% 47.4% 57.1% Seq2SQL (Zhong et al., 2017) 49.5% 60.8% 48.3% 59.4% SQLNet (Xu et al., 2017) – 69.8% – 68.0% Guo and Gao (2018) – 71.1% – 69.0% STAMP (w/o cell) 58.6% 67.8% 58.0% 67.4% STAMP (w/o column-cell relation) 59.3% 71.8% 58.4% 70.6% STAMP 61.5% 74.8% 60.7% 74.4% STAMP+RL 61.7% 75.1% 61.0% 74.6% Table 1: Performances of different approaches on the WikiSQL dataset. 
Two evaluation metrics are logical form accuracy (Acclf) and execution accuracy (Accex). Our model is abbreviated as (STAMP). evaluation metrics. One metric is logical form accuracy (Acclf), which measures the percentage of the generated SQL queries that have exact string match with the ground truth SQL queries. Since different SQL queries might obtain the same result, another metric is execution accuracy (Accex), which measures the percentage of the generated SQL queries that obtain the correct answer. 5.1 Model Comparisons After released, WikiSQL dataset has attracted a lot of attentions from both industry and research communities. Zhong et al. (2017) develop following methods, including (1) Aug.PntNet which is an end-to-end learning pointer network approach; (2) Seq2SQL (no RL), in which two separate classifiers are trained for SELECT aggregator and SELECT column, separately; and (3) Seq2SQL, in which reinforcement learning is further used for model training. Results of tattentional sequenceto-sequence learning baseline (Attentional Seq2Seq) are also reported in (Zhong et al., 2017). Xu et al. (2017) develop SQLNet, which predicts SELECT clause and WHERE clause separately. Sequence-to-set neural architecture and column attention are adopted to predict the WHERE clause. Similarly, Guo and Gao (2018) develop tailored modules to handle three components of SQL queries, respectively. A parallel work from (Yu et al., 2018) obtains higher execution accuracy (82.6%) on WikiSQL, however, its model is slotfilling based which is designed specifically for the “select-aggregator-where” type and utilizes external knowledge base (such as Freebase) to tag the question words. We believe this mechanism could improve our model as well, we leave this as a potential future work. Our model is abbreviated as (STAMP), which is short for Syntax- and Table- Aware seMantic Parser. The STAMP model in Table 1 stands for the model we describe in §4.2 plus §4.3. STAMP+RL is the model that is fine-tuned with the reinforcement learning strategy as described in §4.4. We implement a simplified version of our approach (w/o cell), in which WHERE values come from the question. Thus, this setting differs from Aug.PntNet in the generation of WHERE column. We also study the influence of the relation-cell relation (w/o column-cell relation) through removing the enhanced column vector, which is calculated by weighted averaging cell vectors. From Table 1, we can see that STAMP performs better than existing systems on WikiSQL. Incorporating RL strategy does not significantly improve the performance. Our simplified model, STAMP (w/o cell), achieves better accuracy than Aug.PntNet, which further reveals the effects of the column channel. Results also demonstrate the effects of incorporating the column-cell relation, removing which leads to about 4% performance drop in terms of Accex. 5.2 Model Analysis: Fine-Grained Accuracy We analyze the STAMP model from different perspectives in this part. 
Firstly, since SQL queries in WikiSQL consists of SELECT column, SELECT aggregator, and WHERE clause, we report the results with regard 367 Methods Dev Test Accsel Accagg Accwhere Accsel Accagg Accwhere Aug.PntNet (reimplemented by us) 80.9% 89.3% 62.1% 81.3% 89.7% 62.1% Seq2SQL (Zhong et al., 2017) 89.6% 90.0% 62.1% 88.9% 90.1% 60.2% SQLNet (Xu et al., 2017) 91.5% 90.1% 74.1% 90.9% 90.3% 71.9% Guo and Gao (2018) 92.5% 90.1% 74.7% 91.9% 90.3% 72.8% STAMP (w/o cell) 89.9% 89.2% 72.1% 89.2% 89.3% 71.2% STAMP (w/o column-cell relation) 89.3% 89.2% 73.2% 88.8% 89.2% 71.8% STAMP 89.4% 89.5% 77.1% 88.9% 89.7% 76.0% STAMP+RL 89.6% 89.7% 77.3% 90.0% 89.9% 76.3% Table 2: Fine-grained accuracies on the WikiSQL dev and test sets. Accuracy (Acclf) is evaluated on SELECT column (Accsel) , SELECT aggregator (Accagg), and WHERE clause (Accwhere), respectively. to more fine-grained evaluation metrics over these aspects. Results are given in Table 2, in which the numbers of Seq2SQL and SQLNet are reported in (Xu et al., 2017). We can see that the main improvement of STAMP comes from the WHERE clause, which is also the key challenge of the WikiSQL dataset. This is consistent with our primary intuition on improving the prediction of WHERE column and WHERE value. The accuracies of STAMP on SELECT column and SELECT aggregator are not as high as SQLNet. The main reason is that these two approaches train the SELECT clause separately while STAMP learns all these components in a unified paradigm. 5.3 Model Analysis: Difficulty Analysis We study the performance of STAMP on different portions of the test set according to the difficulties of examples. We compare between Aug.PntNet (re-implemented by us) and STAMP. In this work, the difficulty of an example is reflected by the number of WHERE columns. Method #where Dev Test Aug.PntNet = 1 63.4% 63.8% = 2 51.0% 51.8% ≥3 38.5% 38.1% STAMP = 1 80.9% 80.2% = 2 65.1% 65.4% ≥3 44.1% 48.2% Table 3: Execution accuracy (Accex) on different groups of WikiSQL dev and test sets. From Table 3, we can see that STAMP outperforms Aug.PntNet in all these groups. The accuracy decreases with the increase of the number of WHERE conditions. 5.4 Model Analysis: Executable Analysis We study the percentage of executable SQL queries in the generated results. As shown in Table 4, STAMP significantly outperforms Aug.PntNet. Almost all the results of STAMP are executable. This is because STAMP avoids generating incomplete column names or cells, and guarantees the correlation between WHERE conditions and WHERE values in the table. Dev Test Aug.PntNet 77.9% 78.7% STAMP 99.9% 99.9% Table 4: Percentage of the executable SQL queries on WikiSQL dev and test sets. 5.5 Model Analysis: Case Study We give a case study to illustrate the generated results by STAMP, with a comparison to Aug.PntNet. Results are given in Figure 3. In the first example, Aug.PntNet generates incomplete column name (“style”), which is addressed in STAMP through replicating an entire column name. In the second example, the WHERE value (“brazilian jiu-jitsu”) does not belong to the generated WHERE column (“Masters”) in Aug.PntNet. This problem is avoided in STAMP through incorporating the table content. 5.6 Error Analysis We conduct error analysis on the dev set of WikiSQL to show the limitation of the STAMP model and where is the room for making further improvements. 
We analyze the 2,302 examples which are executed to wrong answers by the STAMP model, and find that 33.6% of them have wrong SE368 Episode # Country City Martial Art/Style Masters Original Airdate 1.1 China Dengfeng Kung Fu ( Wushu ; Sanda ) Shi De Yang, Shi De Cheng 28-Dec-07 1.2 Philippines Manila Kali Leo T. Gaje Jr. Cristino Vasquez 4-Jan-08 1.3 Japan Tokyo Kyokushin Karate Yuzo Goda, Isamu Fukuda 11-Jan-08 1.4 Mexico Mexico City Boxing Ignacio "Nacho" Beristáin Tiburcio Garcia 18-Jan-08 1.5 Indonesia Bandung Pencak Silat Rita Suwanda Dadang Gunawan 25-Jan-08 1.7 South Korea Seoul Hapkido Kim Nam Je, Bae Sung Book Ju Soong Weo 8-Feb-08 1.8 Brazil Rio de Janeiro Brazilian Jiu-Jitsu Breno Sivak, Renato Barreto Royler Gracie 15-Feb-08 1.9 Israel Netanya Krav Maga Ran Nakash Avivit Oftek Cohen 22-Feb-08 how many masters fought using a boxing style ? Question #1: select count masters from table where style = boxing Aug.PntNet: STAMP: select count masters from table where martial art/style = boxing when did the episode featuring a master using brazilian jiu-jitsu air ? Question #2: select original airdate from table where masters = brazilian jiu-jitsu Aug.PntNet: STAMP: select original airdate from table where martial art/style = brazilian jiu-jitsu Figure 3: Case study on the dev set between Aug.PntNet and STAMP. These two questions are based on the same table. Each question is followed by the generated SQL queries from the two approaches. LECT columns, 15.7% of them have a different number of conditions in the WHERE clause, and 53.7% of them have a different WHERE column set compared to the ground truth. Afterwards, we analyze a portion of randomly sampled dissatisfied examples. Consistent with the qualitative results, most problems come from column prediction, including both SELECT clause and WHERE clause. Even though the overall accuracy of the SELECT column prediction is about 90% and we also use cell information to enhance the column representation, this semantic gap is still the main bottleneck. Extracting and incorporating various expressions for a table column (i.e. relation in a relational database) might be a potential way to mitigate this problem. Compared to column prediction, the quality of cell prediction is much better because cell content typically (partially) appears in the question. 5.7 Transfers to WikiTableQuestions WikiTableQuestions (Pasupat and Liang, 2015) is a widely used dataset for semantic parsing. To further test the performance of our approach, we conduct an additional transfer learning experiment. Firstly, we directly apply the STAMP model trained on WikiSQL to WikiTableQuestions, which is an unsupervised learning setting for the WikiTableQuestions dataset. Results show that the test accuracy of STAMP in this setting is 14.5%, which has a big gap between best systems on WikiTableQuestions, where Zhang et al. (2017) and Krishnamurthy et al. (2017) yield 43.3% and 43.7%, respectively. Furthermore, we apply the learnt STAMP model to generate SQL queries on natural language questions from WikiTableQuestions, and regard the generated SQL queries which could be executed to correct answers as additional pseudo question-SQL pairs. In this way, the STAMP model learnt from a combination of WikiSQL and pseudo question-SQL pairs could achieve 21.0% on the test set. We find that this big gap is caused by the difference between the two datasets. 
Among the 8 types of questions in WikiTableQuestions, half of them, including {"Union", "Intersection", "Reverse", "Arithmetic"}, are not covered in the WikiSQL dataset. It is an interesting direction to leverage algorithms developed on the two datasets to improve one another.

5.8 Discussion

Compared to slot-filling based models that restrict target SQL queries to the fixed form of "select-aggregator-where", our model is less tailored. We believe that it is easy to expand our model to generate nested SQL queries or JOIN clauses, which could also be easily trained with back-propagation if enough training instances of these SQL types are available. For example, we could incorporate a hierarchical "value" channel to handle nested queries. Let us suppose our decoder works horizontally, so that the next generated token is to the right of the current token. Inspired by the chunk-based decoder for neural machine translation (Ishiwatari et al., 2017), we could increase the depth of the "value" channel to generate the tokens of a nested WHERE value along the vertical axis. During inference, an additional gating function might be necessary to determine whether to generate a nested query, followed by the generation of the WHERE value. An intuitive way to extend our model to handle JOIN clauses is to add a fourth channel, which predicts a table from a collection of tables. Therefore, the decoder should learn to select one of the four channels at each time step. Accordingly, we need to add "from" as a new SQL keyword in order to generate SQL queries including "from xxxTable". In terms of the syntax of SQL, the grammar we used in this work could be regarded as shallow syntax, such as the three channels and the column-cell relation. We do not use deep syntax, such as the sketch of the SQL language utilized in some slot-filling models, because incorporating it would make the model clumsy. Instead, we let the model learn the sequential and compositional relations of SQL queries automatically from data. Empirical results show that our model learns these patterns well.

6 Conclusion and Future Work

In this work, we develop STAMP, a Syntax- and Table-Aware seMantic Parser that automatically maps natural language questions to SQL queries, which can be executed on a web table or relational database to get the answer. STAMP has three channels, and it learns which channel to switch to at each time step. STAMP considers cell information and the relation between cells and column names in the generation process. Experiments are conducted on the WikiSQL dataset. Results show that STAMP achieves new state-of-the-art performance on WikiSQL. We conduct extensive experimental analysis to show the advantages and limitations of our approach, and where there is room for others to make further improvements. The SQL language has more complicated queries than the cases included in the WikiSQL dataset, including (1) querying over multiple relational databases, (2) nested SQL queries as condition values, and (3) more operations such as "group by" and "order by". In this work, the STAMP model is not designed for the first and second cases, but it could be easily adapted to the third case through incorporating additional SQL keywords, and of course the learning of these requires a dataset of the same type. In the future, we plan to improve the accuracy of the column prediction component. We also plan to build a large-scale dataset that considers more sophisticated SQL queries. We also plan to extend the approach to low-resource scenarios (Feng et al., 2018).
Acknowledgments We thank Yaming Sun for her great help. We also would like to thank three anonymous reviewers for their valuable comments and suggestions. This research was partly supported by National Natural Science Foundation of China(No. 61632011 and No.61772156, and No.61472107). References Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics 1:49–62. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. Proceeding of ICLR . Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In EMNLP. 5, page 6. Florin Brad, Radu Cristian Alexandru Iacob, Ionel Alexandru Hosu, and Traian Rebedea. 2017. Dataset for a neural natural language interface for databases (nnlidb). In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Asian Federation of Natural Language Processing, Taipei, Taiwan, pages 906–914. http://www.aclweb.org/anthology/I17-1091. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1724–1734. http://www.aclweb.org/anthology/D141179. Deborah A Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the atis task: The atis-3 corpus. In Proceedings of the workshop on Human Language Technology. Association for Computational Linguistics, pages 43–48. 370 Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 33–43. http://www.aclweb.org/anthology/P16-1004. Li Dong, Furu Wei, Hong Sun, Ming Zhou, and Ke Xu. 2015. A hybrid neural model for type classification of entity mentions. In IJCAI. pages 1243–1249. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 199–209. http://www.aclweb.org/anthology/N16-1024. Xiaocheng Feng, Xiachong Feng, Bing Qin, Zhangyin Feng, and Ting Liu. 2018. Improving low resource named entity recognition using cross-lingual knowledge transfer. In IJCAI. Alessandra Giordani and Alessandro Moschitti. 2012. Translating questions to sql queries with generative parsers discriminatively reranked. In COLING (Posters). pages 401–410. Omer Goldman, Veronica Latcinnik, Udi Naveh, Amir Globerson, and Jonathan Berant. 2017. Weakly-supervised semantic parsing with abstract examples. CoRR abs/1711.05240. http://arxiv.org/abs/1711.05240. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. 
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 373–385 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 373 Multitask Parsing Across Semantic Representations Daniel Hershcovich1,2 Omri Abend2 1The Edmond and Lily Safra Center for Brain Sciences 2School of Computer Science and Engineering Hebrew University of Jerusalem {danielh,oabend,arir}@cs.huji.ac.il Ari Rappoport2 Abstract The ability to consolidate information of different types is at the core of intelligence, and has tremendous practical value in allowing learning for one task to benefit from generalizations learned for others. In this paper we tackle the challenging task of improving semantic parsing performance, taking UCCA parsing as a test case, and AMR, SDP and Universal Dependencies (UD) parsing as auxiliary tasks. We experiment on three languages, using a uniform transition-based system and learning architecture for all parsing tasks. Despite notable conceptual, formal and domain differences, we show that multitask learning significantly improves UCCA parsing in both in-domain and out-of-domain settings. Our code is publicly available.1 1 Introduction Semantic parsing has arguably yet to reach its full potential in terms of its contribution to downstream linguistic tasks, partially due to the limited amount of semantically annotated training data. This shortage is more pronounced in languages other than English, and less researched domains. Indeed, recent work in semantic parsing has targeted, among others, Abstract Meaning Representation (AMR; Banarescu et al., 2013), bilexical Semantic Dependencies (SDP; Oepen et al., 2016) and Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013). While these schemes are formally different and focus on different distinctions, much of their semantic content is shared (Abend and Rappoport, 2017). Multitask learning (MTL; Caruana, 1997) allows exploiting the overlap between tasks to ef1http://github.com/danielhers/tupa fectively extend the training data, and has greatly advanced with neural networks and representation learning (see §2). We build on these ideas and propose a general transition-based DAG parser, able to parse UCCA, AMR, SDP and UD (Nivre et al., 2016). We train the parser using MTL to obtain significant improvements on UCCA parsing over single-task training in (1) in-domain and (2) outof-domain settings in English; (3) an in-domain setting in German; and (4) an in-domain setting in French, where training data is scarce. The novelty of this work is in proposing a general parsing and learning architecture, able to accommodate such widely different parsing tasks, and in leveraging it to show benefits from learning them jointly. 2 Related Work MTL has been used over the years for NLP tasks with varying degrees of similarity, examples including joint classification of different arguments in semantic role labeling (Toutanova et al., 2005), and joint parsing and named entity recognition (Finkel and Manning, 2009). Similar ideas, of parameter sharing across models trained with different datasets, can be found in studies of domain adaptation (Blitzer et al., 2006; Daume III, 2007; Ziser and Reichart, 2017). For parsing, domain adaptation has been applied successfully in parser combination and co-training (McClosky et al., 2010; Baucom et al., 2013). 
Neural MTL has mostly been effective in tackling formally similar tasks (Søgaard and Goldberg, 2016), including multilingual syntactic dependency parsing (Ammar et al., 2016; Guo et al., 2016), as well as multilingual (Duong et al., 2017), and cross-domain semantic parsing (Herzig and Berant, 2017; Fan et al., 2017). Sharing parameters with a low-level task has 374 shown great benefit for transition-based syntactic parsing, when jointly training with POS tagging (Bohnet and Nivre, 2012; Zhang and Weiss, 2016), and with lexical analysis (Constant and Nivre, 2016; More, 2016). Recent work has achieved state-of-the-art results in multiple NLP tasks by jointly learning the tasks forming the NLP standard pipeline using a single neural model (Collobert et al., 2011; Hashimoto et al., 2017), thereby avoiding cascading errors, common in pipelines. Much effort has been devoted to joint learning of syntactic and semantic parsing, including two CoNLL shared tasks (Surdeanu et al., 2008; Hajiˇc et al., 2009). Despite their conceptual and practical appeal, such joint models rarely outperform the pipeline approach (Llu´ıs and M`arquez, 2008; Henderson et al., 2013; Lewis et al., 2015; Swayamdipta et al., 2016, 2017). Peng et al. (2017a) performed MTL for SDP in a closely related setting to ours. They tackled three tasks, annotated over the same text and sharing the same formal structures (bilexical DAGs), with considerable edge overlap, but differing in target representations (see §3). For all tasks, they reported an increase of 0.5-1 labeled F1 points. Recently, Peng et al. (2018) applied a similar approach to joint frame-semantic parsing and semantic dependency parsing, using disjoint datasets, and reported further improvements. 3 Tackled Parsing Tasks In this section, we outline the parsing tasks we address. We focus on representations that produce full-sentence analyses, i.e., produce a graph covering all (content) words in the text, or the lexical concepts they evoke. This contrasts with “shallow” semantic parsing, primarily semantic role labeling (SRL; Gildea and Jurafsky, 2002; Palmer et al., 2005), which targets argument structure phenomena using flat structures. We consider four formalisms: UCCA, AMR, SDP and Universal Dependencies. Figure 1 presents one sentence annotated in each scheme. Universal Conceptual Cognitive Annotation. UCCA (Abend and Rappoport, 2013) is a semantic representation whose main design principles are ease of annotation, cross-linguistic applicability, and a modular architecture. UCCA represents the semantics of linguistic utterances as directed acyclic graphs (DAGs), where terminal (childless) nodes correspond to the text tokens, and nonAfter L graduation P H , U John A moved P to R Paris C A H A LR LA LA (a) UCCA move-01 after graduate-01 op1 time person name ”John” op1 name ARG0 city name ”Paris” op1 name ARG2 ARG0 (b) AMR After graduation , John moved to Paris ARG2 ARG1 ARG1 top ARG2 ARG1 ARG2 (c) DM After graduation , John moved to Paris case punct nsubj obl case root obl (d) UD Figure 1: Example graph for each task. Figure 1a presents a UCCA graph. The dashed edge is remote, while the blue node and its outgoing edges represent inter-Scene linkage. Pre-terminal nodes and edges are omitted for brevity. Figure 1b presents an AMR graph. Text tokens are not part of the graph, and must be matched to concepts and constants by alignment. Variables are represented by their concepts. 
Figure 1c presents a DM semantic dependency graph, containing multiple roots: “After”, “moved” and “to”, of which “moved” is marked as top. Punctuation tokens are excluded from SDP graphs. Figure 1d presents a UD tree. Edge labels express syntactic relations. terminal nodes to semantic units that participate in some super-ordinate relation. Edges are labeled, indicating the role of a child in the relation the parent represents. Nodes and edges belong to one of several layers, each corresponding to a “module” of semantic distinctions. UCCA’s foundational layer (the only layer for which annotated data exists) mostly covers predicate-argument structure, semantic heads and inter-Scene relations. UCCA distinguishes primary edges, corresponding to explicit relations, from remote edges 375 (appear dashed in Figure 1a) that allow for a unit to participate in several super-ordinate relations. Primary edges form a tree in each layer, whereas remote edges enable reentrancy, forming a DAG. Abstract Meaning Representation. AMR (Banarescu et al., 2013) is a semantic representation that encodes information about named entities, argument structure, semantic roles, word sense and co-reference. AMRs are rooted directed graphs, in which both nodes and edges are labeled. Most AMRs are DAGs, although cycles are permitted. AMR differs from the other schemes we consider in that it does not anchor its graphs in the words of the sentence (Figure 1b). Instead, AMR graphs connect variables, concepts (from a predefined set) and constants (which may be strings or numbers). Still, most AMR nodes are alignable to text tokens, a tendency used by AMR parsers, which align a subset of the graph nodes to a subset of the text tokens (concept identification). In this work, we use pre-aligned AMR graphs. Despite the brief period since its inception, AMR has been targeted by a number of works, notably in two SemEval shared tasks (May, 2016; May and Priyadarshi, 2017). To tackle its variety of distinctions and unrestricted graph structure, AMR parsers often use specialized methods. Graph-based parsers construct AMRs by identifying concepts and scoring edges between them, either in a pipeline fashion (Flanigan et al., 2014; Artzi et al., 2015; Pust et al., 2015; Foland and Martin, 2017), or jointly (Zhou et al., 2016). Another line of work trains machine translation models to convert strings into linearized AMRs (Barzdins and Gosko, 2016; Peng et al., 2017b; Konstas et al., 2017; Buys and Blunsom, 2017b). Transition-based AMR parsers either use dependency trees as pre-processing, then mapping them into AMRs (Wang et al., 2015a,b, 2016; Goodman et al., 2016), or use a transition system tailored to AMR parsing (Damonte et al., 2017; Ballesteros and Al-Onaizan, 2017). We differ from the above approaches in addressing AMR parsing using the same general DAG parser used for other schemes. Semantic Dependency Parsing. SDP uses a set of related representations, targeted in two recent SemEval shared tasks (Oepen et al., 2014, 2015), and extended by Oepen et al. (2016). They correspond to four semantic representation schemes, referred to as DM, PAS, PSD and CCD, representing predicate-argument relations between content words in a sentence. All are based on semantic formalisms converted into bilexical dependencies— directed graphs whose nodes are text tokens. Edges are labeled, encoding semantic relations between the tokens. Non-content tokens, such as punctuation, are left out of the analysis (see Figure 1c). 
Graphs containing cycles have been removed from the SDP datasets. We use one of the representations from the SemEval shared tasks: DM (DELPH-IN MRS), converted from DeepBank (Flickinger et al., 2012), a corpus of hand-corrected parses from LinGO ERG (Copestake and Flickinger, 2000), an HPSG (Pollard and Sag, 1994) using Minimal Recursion Semantics (Copestake et al., 2005). Universal Dependencies. UD (Nivre et al., 2016, 2017) has quickly become the dominant dependency scheme for syntactic annotation in many languages, aiming for cross-linguistically consistent and coarse-grained treebank annotation. Formally, UD uses bilexical trees, with edge labels representing syntactic relations between words. We use UD as an auxiliary task, inspired by previous work on joint syntactic and semantic parsing (see §2). In order to reach comparable analyses cross-linguistically, UD often ends up in annotation that is similar to the common practice in semantic treebanks, such as linking content words to content words wherever possible. Using UD further allows conducting experiments on languages other than English, for which AMR and SDP annotated data is not available (§7). In addition to basic UD trees, we use the enhanced++ UD graphs available for English, which are generated by the Stanford CoreNLP converters (Schuster and Manning, 2016).2 These include additional and augmented relations between content words, partially overlapping with the notion of remote edges in UCCA: in the case of control verbs, for example, a direct relation is added in enhanced++ UD between the subordinated verb and its controller, which is similar to the semantic schemes’ treatment of this construction. 4 General Transition-based DAG Parser All schemes considered in this work exhibit reentrancy and discontinuity (or non-projectivity), to varying degrees. In addition, UCCA and AMR 2http://github.com/stanfordnlp/CoreNLP 376 contain non-terminal nodes. To parse these graphs, we extend TUPA (Hershcovich et al., 2017), a transition-based parser originally developed for UCCA, as it supports all these structural properties. TUPA’s transition system can yield any labeled DAG whose terminals are anchored in the text tokens. To support parsing into AMR, which uses graphs that are not anchored in the tokens, we take advantage of existing alignments of the graphs with the text tokens during training (§5). First used for projective syntactic dependency tree parsing (Nivre, 2003), transition-based parsers have since been generalized to parse into many other graph families, such as (discontinuous) constituency trees (e.g., Zhang and Clark, 2009; Maier and Lichte, 2016), and DAGs (e.g., Sagae and Tsujii, 2008; Du et al., 2015). Transition-based parsers apply transitions incrementally to an internal state defined by a buffer B of remaining tokens and nodes, a stack S of unresolved nodes, and a labeled graph G of constructed nodes and edges. When a terminal state is reached, the graph G is the final output. A classifier is used at each step to select the next transition, based on features that encode the current state. 4.1 TUPA’s Transition Set Given a sequence of tokens w1, . . . , wn, we predict a rooted graph G whose terminals are the tokens. Parsing starts with the root node on the stack, and the input tokens in the buffer. 
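To make this machinery concrete, the following is a minimal Python sketch of the parser state and greedy decoding loop described above, with a drastically simplified transition set; the class, transition, and function names are illustrative placeholders rather than TUPA's actual implementation, and the score function stands in for the neural classifier.

```python
# Minimal, illustrative sketch of a greedy transition-based parsing loop.
# Names (ParserState, score, ROOT, the tiny transition set) are placeholders.
from dataclasses import dataclass, field
from typing import List, Tuple

ROOT = "<ROOT>"

@dataclass
class ParserState:
    stack: List[str] = field(default_factory=lambda: [ROOT])         # starts with the root node
    buffer: List[str] = field(default_factory=list)                  # remaining input tokens
    edges: List[Tuple[str, str, str]] = field(default_factory=list)  # (parent, label, child)

    def finished(self) -> bool:
        return not self.buffer and len(self.stack) <= 1

def legal_transitions(state: ParserState) -> List[str]:
    """Enumerate transitions allowed in the current state (a tiny subset)."""
    actions = []
    if state.buffer:
        actions.append("SHIFT")
    if len(state.stack) > 1:
        actions.append("RIGHT-EDGE")   # attach the stack top under the node below it
        actions.append("REDUCE")
    if state.finished():
        actions.append("FINISH")
    return actions

def score(state: ParserState, action: str) -> float:
    """Stand-in for the classifier over state features (BiLSTM + MLP in the paper)."""
    return {"RIGHT-EDGE": 2.0, "SHIFT": 1.0, "REDUCE": 0.5, "FINISH": 3.0}[action]

def apply(state: ParserState, action: str) -> None:
    if action == "SHIFT":
        state.stack.append(state.buffer.pop(0))
    elif action == "REDUCE":
        state.stack.pop()
    elif action == "RIGHT-EDGE":
        child = state.stack.pop()
        state.edges.append((state.stack[-1], "E", child))

def parse(tokens: List[str]) -> List[Tuple[str, str, str]]:
    state = ParserState(buffer=list(tokens))
    while True:
        action = max(legal_transitions(state), key=lambda a: score(state, a))
        if action == "FINISH":
            return state.edges
        apply(state, action)

print(parse(["John", "moved", "to", "Paris"]))
```

In the full parser, the transition set is the richer one listed next, scores come from the BiLSTM-based classifier, and transitions that violate the target scheme's constraints are filtered out before the argmax.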
The TUPA transition set includes the standard SHIFT and REDUCE operations, NODEX for creating a new non-terminal node and an X-labeled edge, LEFT-EDGEX and RIGHT-EDGEX to create a new primary X-labeled edge, LEFT-REMOTEX and RIGHT-REMOTEX to create a new remote X-labeled edge, SWAP to handle discontinuous nodes, and FINISH to mark the state as terminal. Although UCCA contains nodes without any text tokens as descendants (called implicit units), these nodes are infrequent and only cover 0.5% of non-terminal nodes. For this reason we follow previous work (Hershcovich et al., 2017) and discard implicit units from the training and evaluation, and so do not include transitions for creating them. In AMR, implicit units are considerably more common, as any unaligned concept with no aligned descendents is implicit (about 6% of the nodes). Implicit AMR nodes usually result from alignment errors, or from abstract concepts which Parser state S , B John moved to Paris . G After L graduation P H Classifier BiLSTM Embeddings After graduation to Paris ... MLP transition softmax Figure 2: Illustration of the TUPA model, adapted from Hershcovich et al. (2017). Top: parser state. Bottom: BiLTSM architecture. have no explicit realization in the text (Buys and Blunsom, 2017a). We ignore implicit nodes when training on AMR as well. TUPA also does not support node labels, which are ubiquitous in AMR but absent in UCCA structures (only edges are labeled in UCCA). We therefore only produce edge labels and not node labels when training on AMR. 4.2 Transition Classifier To predict the next transition at each step, we use a BiLSTM with embeddings as inputs, followed by an MLP and a softmax layer for classification (Kiperwasser and Goldberg, 2016). The model is illustrated in Figure 2. Inference is performed greedily, and training is done with an oracle that yields the set of all optimal transitions at a given state (those that lead to a state from which the gold graph is still reachable). Out of this set, the actual transition performed in training is the one with the highest score given by the classifier, which is trained to maximize the sum of log-likelihoods of all optimal transitions at each step. Features. We use the original TUPA features, representing the words, POS tags, syntactic dependency relations, and previously predicted edge labels for nodes in specific locations in the parser state. In addition, for each token we use embeddings representing the one-character prefix, threecharacter suffix, shape (capturing orthographic 377 After L graduation P H , U John A moved P to R Paris C A H A (a) UCCA moved After graduation op time John name ARG0 Paris name ARG2 ARG0 (b) AMR Afterggraduation , root g John ARG1 movedg head tog Parisg root top head ARG2 ARG1 ARG1 head ARG2 ARG2 (c) DM Afterg case graduation head obl ,g Johng movedgtog case Parisg head obl punct nsubj head (d) UD Figure 3: Graphs from Figure 1, after conversion to the unified DAG format (with pre-terminals omitted: each terminal drawn in place of its parent). Figure 3a presents a converted UCCA graph. Linkage nodes and edges are removed, but the original graph is otherwise preserved. Figure 3b presents a converted AMR graph, with text tokens added according to the alignments. Numeric suffixes of op relations are removed, and names collapsed. Figure 3c presents a converted SDP graph (in the DM representation), with intermediate non-terminal head nodes introduced. In case of reentrancy, an arbitrary reentrant edge is marked as remote. 
Figure 3d presents a converted UD graph. As in SDP, intermediate nonterminals and head edges are introduced. While converted UD graphs form trees, enhanced++ UD graphs may not. features, e.g., “Xxxx”), and named entity type,3 all provided by spaCy (Honnibal and Montani, 2018).4 To the learned word vectors, we concatenate the 250K most frequent word vectors from 3See Supplementary Material for a full listing of features. 4http://spacy.io fastText (Bojanowski et al., 2017),5 pre-trained over Wikipedia and updated during training. Constraints. As each annotation scheme has different constraints on the allowed graph structures, we apply these constraints separately for each task. During training and parsing, the relevant constraint set rules out some of the transitions according to the parser state. Some constraints are task-specific, others are generic. For example, in UCCA, a terminal may only have one parent. In AMR, a concept corresponding to a PropBank frame may only have the core arguments defined for the frame as children. An example of a generic constraint is that stack nodes that have been swapped should not be swapped again.6 5 Unified DAG Format To apply our parser to the four target tasks (§3), we convert them into a unified DAG format, which is inclusive enough to allow representing any of the schemes with very little loss of information.7 The format consists of a rooted DAG, where the tokens are the terminal nodes. As in the UCCA format, edges are labeled (but not nodes), and are divided into primary and remote edges, where the primary edges form a tree (all nodes have at most one primary parent, and the root has none). Remote edges enable reentrancy, and thus together with primary edges form a DAG. Figure 3 shows examples for converted graphs. Converting UCCA into the unified format consists simply of removing linkage nodes and edges (see Figure 3a), which were also discarded by Hershcovich et al. (2017). Converting bilexical dependencies. To convert DM and UD into the unified DAG format, we add a pre-terminal for each token, and attach the preterminals according to the original dependency edges: traversing the tree from the root down, for each head token we create a non-terminal parent with the edge label head, and add the node’s dependents as children of the created non-terminal node (see Figures 3c and 3d). Since DM allows multiple roots, we form a single root node, whose 5http://fasttext.cc 6 To implement this constraint, we define a swap index for each node, assigned when the node is created. At initialization, only the root node and terminals exist. We assign the root a swap index of 0, and for each terminal, its position in the text (starting at 1). Whenever a node is created as a result of a NODE transition, its swap index is the arithmetic mean of the swap indices of the stack top and buffer head. 7See Supplementary Material for more conversion details. 378 Parser state . . . Classifier Task-specific BiLSTM Shared BiLSTM Shared embeddings After graduation to Paris ... Task-specific MLP transition softmax Figure 4: MTL model. Token representations are computed both by a task-specific and a shared BiLSTM. Their outputs are concatenated with the parser state embedding, identical to Figure 2, and fed into the task-specific MLP for selecting the next transition. Shared parameters are shown in blue. children are the original roots. The added edges are labeled root, where top nodes are labeled top instead. 
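As a concrete illustration of the bilexical-dependency conversion just described, the following sketch builds the pre-terminal and head-labeled non-terminal structure over a toy (head, relation, dependent) edge list. The encoding, node names, and root handling are assumptions made for this example, and the marking of remote edges for reentrancies is omitted.

```python
# Illustrative sketch of converting bilexical dependencies into the unified
# DAG format described above: a pre-terminal T<i> for every token, a
# non-terminal N<i> with a head-labeled edge for every token that has
# dependents, and an artificial root above the original root(s).
from collections import defaultdict

def to_unified_dag(dep_edges, roots):
    """dep_edges: list of (head_position, relation_label, dependent_position),
    with 1-indexed token positions; roots: positions of the original root(s).
    Returns a list of (parent, label, child) edges over node names."""
    deps = defaultdict(list)
    for head, label, dep in dep_edges:
        deps[head].append((label, dep))

    def node(pos):
        # Heads get a non-terminal; leaves are represented by their pre-terminal.
        return ("N%d" % pos) if deps.get(pos) else ("T%d" % pos)

    edges = [("ROOT", "root", node(r)) for r in roots]    # single artificial root
    for head, children in deps.items():
        edges.append((node(head), "head", "T%d" % head))  # the head's own pre-terminal
        for label, dep in children:
            edges.append((node(head), label, node(dep)))  # original relation label
    return edges

# Toy example: "John moved to Paris" with UD-style heads.
print(to_unified_dag([(2, "nsubj", 1), (2, "obl", 4), (4, "case", 3)], roots=[2]))
```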
In case of reentrancy, an arbitrary parent is marked as primary, and the rest as remote (denoted as dashed edges in Figure 3). Converting AMR. In the conversion from AMR, node labels are dropped. Since alignments are not part of the AMR graph (see Figure 3b), we use automatic alignments (see §7), and attach each node with an edge to each of its aligned terminals. Named entities in AMR are represented as a subgraph, whose name-labeled root has a child for each token in the name (see the two name nodes in Figure 1b). We collapse this subgraph into a single node whose children are the name tokens. 6 Multitask Transition-based Parsing Now that the same model can be applied to different tasks, we can train it in a multitask setting. The fairly small training set available for UCCA (see §7) makes MTL particularly appealing, and we focus on it in this paper, treating AMR, DM and UD parsing as auxiliary tasks. Following previous work, we share only some of the parameters (Klerke et al., 2016; Søgaard and Goldberg, 2016; Bollmann and Søgaard, 2016; Plank, 2016; Braud et al., 2016; Mart´ınez Alonso and Plank, 2017; Peng et al., 2017a, 2018), leaving task-specific sub-networks as well. Concretely, we keep the BiLSTM used by TUPA for the main task (UCCA parsing), add a BiLSTM that is shared across all tasks, and replicate the MLP (feedforward sub-network) for each task. The BiLSTM outputs (concatenated for the main task) are fed into the task-specific MLP (see Figure 4). Feature embeddings are shared across tasks. Unlabeled parsing for auxiliary tasks. To simplify the auxiliary tasks and facilitate generalization (Bingel and Søgaard, 2017), we perform unlabeled parsing for AMR, DM and UD, while still predicting edge labels in UCCA parsing. To support unlabeled parsing, we simply remove all labels from the EDGE, REMOTE and NODE transitions output by the oracle. This results in a much smaller number of transitions the classifier has to select from (no more than 10, as opposed to 45 in labeled UCCA parsing), allowing us to use no BiLSTMs and fewer dimensions and layers for task-specific MLPs of auxiliary tasks (see §7). This limited capacity forces the network to use the shared parameters for all tasks, increasing generalization (Mart´ınez Alonso and Plank, 2017). 7 Experimental Setup We here detail a range of experiments to assess the value of MTL to UCCA parsing, training the parser in single-task and multitask settings, and evaluating its performance on the UCCA test sets in both in-domain and out-of-domain settings. Data. For UCCA, we use v1.2 of the English Wikipedia corpus (Wiki; Abend and Rappoport, 2013), with the standard train/dev/test split (see Table 1), and the Twenty Thousand Leagues Under the Sea corpora (20K; Sulem et al., 2015), annotated in English, French and German.8 For English and French we use 20K v1.0, a small parallel corpus comprising the first five chapters of the book. As in previous work (Hershcovich et al., 2017), we use the English part only as an out-of-domain test set. We train and test on the French part using the standard split, as well as the German corpus (v0.9), which is a pre-release and still contains a considerable amount of noisy annotation. Tuning is performed on the respective development sets. 
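A minimal sketch of the parameter sharing described in §6 is given below, using random projections as stand-ins for the shared and task-specific BiLSTMs and the per-task MLPs; the dimensions, pooling, and names are illustrative assumptions, not the actual DyNet implementation.

```python
# Sketch of the MTL wiring of Section 6: shared embeddings and a shared
# encoder used by all tasks, a task-specific encoder for the main task only,
# and one classifier head per task. Encoders are random projections here.
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(in_dim, out_dim):
    W = rng.normal(size=(out_dim, in_dim))
    return lambda X: np.tanh(W @ X)

EMB_DIM, SHARED_DIM, MAIN_DIM, STATE_DIM = 8, 6, 6, 4
embed = {w: rng.normal(size=EMB_DIM) for w in ["John", "moved", "to", "Paris"]}  # shared embeddings

shared_bilstm = make_encoder(EMB_DIM, SHARED_DIM)   # used by every task
main_bilstm = make_encoder(EMB_DIM, MAIN_DIM)       # main task (UCCA) only
task_mlp = {                                        # one classifier head per task
    "ucca": make_encoder(SHARED_DIM + MAIN_DIM + STATE_DIM, 45),  # labeled transitions
    "amr": make_encoder(SHARED_DIM + STATE_DIM, 10),              # unlabeled auxiliaries
    "dm": make_encoder(SHARED_DIM + STATE_DIM, 10),
    "ud": make_encoder(SHARED_DIM + STATE_DIM, 10),
}

def transition_scores(task, tokens, state_features):
    X = np.stack([embed[w] for w in tokens], axis=1)          # (EMB_DIM, n_tokens)
    reprs = [shared_bilstm(X)]
    if task == "ucca":                                        # concatenate task-specific output
        reprs.append(main_bilstm(X))
    token_repr = np.concatenate(reprs, axis=0).mean(axis=1)   # crude pooling for the sketch
    features = np.concatenate([token_repr, state_features])   # plus parser-state embedding
    return task_mlp[task](features)                           # scores over transitions

print(transition_scores("ucca", ["John", "moved"], np.zeros(STATE_DIM)).shape)
print(transition_scores("dm", ["to", "Paris"], np.zeros(STATE_DIM)).shape)
```

Only the main-task (UCCA) head sees the task-specific encoder and scores the full labeled transition set, while the auxiliary heads operate on the shared representation alone with the much smaller unlabeled transition set.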
For AMR, we use LDC2017T10, identical to the dataset targeted in SemEval 2017 (May and Priyadarshi, 2017).9 For SDP, we use the DM representation from the SDP 2016 dataset (Oepen 8http://github.com/huji-nlp/ucca-corpora 9http://catalog.ldc.upenn.edu/LDC2017T10 379 English French German # tokens # sentences # tokens # sentences # tokens # sentences train dev test train dev test train dev test train dev test train dev test train dev test UCCA Wiki 128444 14676 15313 4268 454 503 20K 12339 506 10047 1558 1324 413 67 67 79894 10059 42366 3429 561 2164 AMR 648950 36521 DM 765025 33964 UD 458277 17062 899163 32347 268145 13814 Table 1: Number of tokens and sentences in the training, development and test sets we use for each corpus and language. et al., 2016).10 For Universal Dependencies, we use all English, French and German treebanks from UD v2.1 (Nivre et al., 2017).11 We use the enhanced++ UD representation (Schuster and Manning, 2016) in our English experiments, henceforth referred to as UD++. We use only the AMR, DM and UD training sets from standard splits. While UCCA is annotated over Wikipedia and over a literary corpus, the domains for AMR, DM and UD are blogs, news, emails, reviews, and Q&A. This domain difference between training and test is particularly challenging (see §9). Unfortunately, none of the other schemes have available annotation over Wikipedia text. Settings. We explore the following settings: (1) in-domain setting in English, training and testing on Wiki; (2) out-of-domain setting in English, training on Wiki and testing on 20K; (3) French indomain setting, where available training dataset is small, training and testing on 20K; (4) German indomain setting on 20K, with somewhat noisy annotation. For MTL experiments, we use unlabeled AMR, DM and UD++ parsing as auxiliary tasks in English, and unlabeled UD parsing in French and German.12 We also report baseline results training only the UCCA training sets. Training. We create a unified corpus for each setting, shuffling all sentences from relevant datasets together, but using only the UCCA development set F1 score as the early stopping criterion. In each training epoch, we use the same number of examples from each task—the UCCA training set size. Since training sets differ in size, we sample this many sentences from each one. The model is implemented using DyNet (Neubig et al., 2017).13 10http://sdp.delph-in.net/osdp-12.tgz 11http://hdl.handle.net/11234/1-2515 12We did not use AMR, DM or UD++ in French and German, as these are only available in English. 13http://dynet.io Multitask Hyperparameter Single Main Aux Shared Pre-trained word dim. 300 300 Learned word dim. 200 200 POS tag dim. 20 20 Dependency relation dim. 10 10 Named entity dim. 3 3 Punctuation dim. 1 1 Action dim. 3 3 Edge label dim. 20 20 MLP layers 2 2 1 MLP dimensions 50 50 50 BiLSTM layers 2 2 2 BiLSTM dimensions 500 300 300 Table 2: Hyperparameter settings. Middle column shows hyperparameters used for the single-task architecture, described in §4.2, and right column for the multitask architecture, described in §6. Main refers to parameters specific to the main task—UCCA parsing (task-specific MLP and BiLSTM, and edge label embedding), Aux to parameters specific to each auxiliary task (task-specific MLP, but no edge label embedding since the tasks are unlabeled), and Shared to parameters shared among all tasks (shared BiLSTM and embeddings). Hyperparameters. We initialize embeddings randomly. 
We use dropout (Srivastava et al., 2014) between MLP layers, and recurrent dropout (Gal and Ghahramani, 2016) between BiLSTM layers, both with p = 0.4. We also use word (α = 0.2), tag (α = 0.2) and dependency relation (α = 0.5) dropout (Kiperwasser and Goldberg, 2016).14 In addition, we use a novel form of dropout, node dropout: with a probability of 0.1 at each step, all features associated with a single node in the parser state are replaced with zero vectors. For optimization we use a minibatch size of 100, decaying all weights by 10−5 at each update, and train with stochastic gradient descent for N epochs with a learning rate of 0.1, followed by AMSGrad (Sashank J. Reddi, 2018) for N epochs with α = 0.001, β1 = 0.9 and β2 = 0.999. We use N = 50 for English and German, and N = 400 for French. We found this training strategy better than using only one of the optimization methods, 14In training, the embedding for a feature value w is replaced with a zero vector with a probability of α #(w)+α, where #(w) is the number of occurrences of w observed. 380 Primary Remote LP LR LF LP LR LF English (in-domain) HAR17 74.4 72.7 73.5 47.4 51.6 49.4 Single 74.4 72.9 73.6 53 50 51.5 AMR 74.7 72.8 73.7 48.7⋆51.1 49.9 DM 75.7⋆73.9⋆74.8⋆54.9 53 53.9 UD++ 75⋆ 73.2 74.1⋆49 52.7 50.8 AMR + DM 75.6⋆73.9⋆74.7⋆49.9 53 51.4 AMR + UD++ 74.9 72.7 73.8 47.1 50 48.5 DM + UD++ 75.9⋆73.9⋆74.9⋆48 54.8 51.2 All 75.6⋆73.1 74.4⋆50.9 53.2 52 Table 3: Labeled precision, recall and F1 (in %) for primary and remote edges, on the Wiki test set. ⋆indicates significantly better than Single. HAR17: Hershcovich et al. (2017). Primary Remote LP LR LF LP LR LF English (out-of-domain) HAR17 68.7 68.5 68.6 38.6 18.8 25.3 Single 69 69 69 41.2 19.8 26.7 AMR 69.5 69.5 69.5 42.9 20.2 27.5 DM 70.7⋆70.7⋆70.7⋆42.7 18.6 25.9 UD++ 69.6 69.8⋆69.7 41.4 22 28.7 AMR + DM 70.7⋆70.2⋆70.5⋆45.8 19.4 27.3 AMR + UD++ 70.2⋆69.9⋆70⋆ 45.1 21.8 29.4 DM + UD++ 70.8⋆70.3⋆70.6⋆41.6 21.6 28.4 All 71.2⋆70.9⋆71⋆ 45.1 22 29.6 French (in-domain) Single 68.2 67 67.6 26 9.4 13.9 UD 70.3 70⋆ 70.1⋆43.8 13.2 20.3 German (in-domain) Single 73.3 71.7 72.5 57.1 17.7 27.1 UD 73.7⋆72.6⋆73.2⋆61.8 24.9⋆35.5⋆ Table 4: Labeled precision, recall and F1 (in %) for primary and remote edges, on the 20K test sets. ⋆indicates significantly better than Single. HAR17: Hershcovich et al. (2017). similar to findings by Keskar and Socher (2017). We select the epoch with the best average labeled F1 score on the UCCA development set. Other hyperparameter settings are listed in Table 2. Evaluation. We evaluate on UCCA using labeled precision, recall and F1 on primary and remote edges, following previous work (Hershcovich et al., 2017). Edges in predicted and gold graphs are matched by terminal yield and label. Significance testing of improvements over the single-task model is done by the bootstrap test (Berg-Kirkpatrick et al., 2012), with p < 0.05. 8 Results Table 3 presents our results on the English indomain Wiki test set. MTL with all auxiliary tasks and their combinations improves the primary F1 score over the single task baseline. In most settings the improvement is statistically significant. Using all auxiliary tasks contributed less than just DM and UD++, the combination of which yielded the best scores yet in in-domain UCCA parsing, with 74.9% F1 on primary edges. Remote F1 is improved in some settings, but due to the relatively small number of remote edges (about 2% of all edges), none of the differences is significant. 
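For reference, a small sketch of the paired bootstrap test (Berg-Kirkpatrick et al., 2012) behind the significance markers in Tables 3 and 4, assuming paired per-sentence scores for the two systems being compared; the scores used here are made-up placeholders.

```python
# Illustrative paired bootstrap test over per-sentence scores.
import random

def paired_bootstrap(scores_a, scores_b, n_samples=10000, seed=0):
    """Estimate a p-value for 'system B beats system A' from paired
    per-sentence scores (e.g., sentence-level F1)."""
    rng = random.Random(seed)
    n = len(scores_a)
    observed = sum(scores_b) - sum(scores_a)
    count = 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]            # resample sentences with replacement
        delta = sum(scores_b[i] - scores_a[i] for i in idx)
        if delta > 2 * observed:                              # twice-the-observed-difference criterion
            count += 1
    return count / n_samples

single = [0.71, 0.68, 0.74, 0.70, 0.69]      # placeholder per-sentence scores
multitask = [0.73, 0.70, 0.75, 0.71, 0.72]
print("p =", paired_bootstrap(single, multitask))
```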
Note that our baseline single-task model (Single) is slightly better than the current state-of-the-art (HAR17; Hershcovich et al., 2017), due to the incorporation of additional features (see §4.2). Table 4 presents our experimental results on the 20K corpora in the three languages. For English out-of-domain, improvements from using MTL are even more marked. Moreover, the improvement is largely additive: the best model, using all three auxiliary tasks (All), yields an error reduction of 2.9%. Again, the single-task baseline is slightly better than HAR17. The contribution of MTL is also apparent in French and German in-domain parsing: 3.7% error reduction in French (having less than 10% as much UCCA training data as English) and 1% in German, where the training set is comparable in size to the English one, but is noisier (see §7). The best MTL models are significantly better than single-task models, demonstrating that even a small training set for the main task may suffice, given enough auxiliary training data (as in French). 9 Discussion Quantifying the similarity between tasks. Task similarity is an important factor in MTL success (Bingel and Søgaard, 2017; Mart´ınez Alonso and Plank, 2017). In our case, the main and auxiliary tasks are annotated on different corpora from different domains (§7), and the target representations vary both in form and in content. To quantify the domain differences, we follow Plank and van Noord (2011) and measure the L1 distance between word distributions in the English training sets and 20K test set (Table 5). All auxiliary training sets are more similar to 20K than Wiki is, which may contribute to the benefits observed on the English 20K test set. As a measure of the formal similarity of the different schemes to UCCA, we use unlabeled F1 score evaluation on both primary and remote edges (ignoring edge labels). To this end, we annotated 100 English sentences from Section 02 of the Penn Treebank Wall Street Journal (PTB WSJ). Anno381 20K AMR DM UD Wiki 1.047 0.895 0.913 0.843 20K 0.949 0.971 0.904 AMR 0.757 0.469 DM 0.754 Table 5: L1 distance between dataset word distributions, quantifying domain differences in English (low is similar). Primary Remote UP UR UF UP UR UF AMR 53.8 15.6 24.2 7.3 5.5 6.3 DM 65 49.2 56 7.4 65.9 13.3 UD++ 82.7 84.6 83.6 12.5 12.7 12.6 Table 6: Unlabeled F1 scores between the representations of the same English sentences (from PTB WSJ), converted to the unified DAG format, and annotated UCCA graphs. tation was carried out by a single expert UCCA annotator, and is publicly available.15 These sentences had already been annotated by the AMR, DM and PTB schemes,16 and we convert their annotation to the unified DAG format. Unlabeled F1 scores between the UCCA graphs and those converted from AMR, DM and UD++ are presented in Table 6. UD++ is highly overlapping with UCCA, while DM less so, and AMR even less (cf. Figure 3). Comparing the average improvements resulting from adding each of the tasks as auxiliary (see §8), we find AMR the least beneficial, UD++ second, and DM the most beneficial, in both in-domain and out-of-domain settings. This trend is weakly correlated with the formal similarity between the tasks (as expressed in Table 6), but weakly negatively correlated with the word distribution similarity scores (Table 5). 
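For concreteness, the domain-similarity measure used above, the L1 distance between unigram distributions (Plank and van Noord, 2011), can be sketched as follows; the corpora are toy placeholders, and the distance ranges from 0 (identical distributions) to 2 (disjoint vocabularies).

```python
# Sketch of the L1 distance between word (unigram) distributions used to
# quantify domain similarity in Table 5. Corpora below are toy placeholders.
from collections import Counter

def l1_distance(corpus_a, corpus_b):
    """corpus_a, corpus_b: lists of tokens. Returns sum_w |p_a(w) - p_b(w)|."""
    counts_a, counts_b = Counter(corpus_a), Counter(corpus_b)
    total_a, total_b = sum(counts_a.values()), sum(counts_b.values())
    vocab = set(counts_a) | set(counts_b)
    return sum(abs(counts_a[w] / total_a - counts_b[w] / total_b) for w in vocab)

wiki = "john moved to paris after graduation".split()
twenty_k = "the captain moved to the nautilus".split()
print(round(l1_distance(wiki, twenty_k), 3))
```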
We conclude that other factors should be taken into account to fully explain this effect, and propose to address this in future work through controlled experiments, where corpora of the same domain are annotated with the various formalisms and used as training data for MTL. AMR, SDP and UD parsing. Evaluating the full MTL model (All) on the unlabeled auxiliary tasks yielded 64.7% unlabeled Smatch F1 (Cai and Knight, 2013) on the AMR development set, when using oracle concept identification (since the auxiliary model does not predict node labels), 27.2% unlabeled F1 on the DM development set, and 15http://github.com/danielhers/wsj 16We convert the PTB format to UD++ v1 using Stanford CoreNLP, and then to UD v2 using Udapi: http: //github.com/udapi/udapi-python. 4.9% UAS on the UD development set. These poor results reflect the fact that model selection was based on the score on the UCCA development set, and that the model parameters dedicated to auxiliary tasks were very limited (to encourage using the shared parameters). However, preliminary experiments using our approach produced promising results on each of the tasks’ respective English development sets, when treated as a single task: 67.1% labeled Smatch F1 on AMR (adding a transition for implicit nodes and classifier for node labels), 79.1% labeled F1 on DM, and 80.1% LAS F1 on UD. For comparison, the best results on these datasets are 70.7%, 91.2% and 82.2%, respectively (Foland and Martin, 2017; Peng et al., 2018; Dozat et al., 2017). 10 Conclusion We demonstrate that semantic parsers can leverage a range of semantically and syntactically annotated data, to improve their performance. Our experiments show that MTL improves UCCA parsing, using AMR, DM and UD parsing as auxiliaries. We propose a unified DAG representation, construct protocols for converting these schemes into the unified format, and generalize a transitionbased DAG parser to support all these tasks, allowing it to be jointly trained on them. While we focus on UCCA in this work, our parser is capable of parsing any scheme that can be represented in the unified DAG format, and preliminary results on AMR, DM and UD are promising (see §9). Future work will investigate whether a single algorithm and architecture can be competitive on all of these parsing tasks, an important step towards a joint many-task model for semantic parsing. Acknowledgments This work was supported by the Israel Science Foundation (grant no. 929/17), by the HUJI Cyber Security Research Center in conjunction with the Israel National Cyber Bureau in the Prime Minister’s Office, and by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI). The first author was supported by a fellowship from the Edmond and Lily Safra Center for Brain Sciences. We thank Roi Reichart, Rotem Dror and the anonymous reviewers for their helpful comments. 382 References Omri Abend and Ari Rappoport. 2013. Universal Conceptual Cognitive Annotation (UCCA). In Proc. of ACL, pages 228–238. Omri Abend and Ari Rappoport. 2017. The state of the art in semantic representation. In Proc. of ACL, pages 77–89. Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah Smith. 2016. Many languages, one parser. TACL, 4:431–444. Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proc. of EMNLP, pages 1699–1710. Miguel Ballesteros and Yaser Al-Onaizan. 2017. AMR parsing using stack-LSTMs. In Proc. of EMNLP, pages 1269–1275. 
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proc. of the Linguistic Annotation Workshop. Guntis Barzdins and Didzis Gosko. 2016. RIGA at SemEval-2016 task 8: Impact of Smatch extensions and character-level neural translation on AMR parsing accuracy. In Proc. of SemEval, pages 1143–1147. Eric Baucom, Levi King, and Sandra K¨ubler. 2013. Domain adaptation for parsing. In Proc. of RANLP, pages 56–64. Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In Proc. of EMNLP-CoNLL, pages 995–1005. Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. In Proc. of EACL, pages 164–169. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proc. of EMNLP, pages 120–128. Bernd Bohnet and Joakim Nivre. 2012. A transitionbased system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proc. of EMNLP-CoNLL, pages 1455–1465. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL, 5:135–146. Marcel Bollmann and Anders Søgaard. 2016. Improving historical spelling normalization with bidirectional lstms and multi-task learning. In Proc. of COLING, pages 131–139. Chlo´e Braud, Barbara Plank, and Anders Søgaard. 2016. Multi-view and multi-task training of RST discourse parsers. In Proc. of COLING, pages 1903–1913. Jan Buys and Phil Blunsom. 2017a. Oxford at SemEval-2017 task 9: Neural AMR parsing with pointer-augmented attention. In Proc. of SemEval, pages 914–919. Jan Buys and Phil Blunsom. 2017b. Robust incremental neural semantic graph parsing. In Proc. of ACL, pages 1215–1226. Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proc. of ACL, pages 748–752. Rich Caruana. 1997. Multitask Learning. Machine Learning, 28(1):41–75. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537. Matthieu Constant and Joakim Nivre. 2016. A transition-based system for joint lexical and syntactic analysis. In Proc. of ACL, pages 161–171. Ann Copestake and Dan Flickinger. 2000. An open source grammar development environment and broad-coverage English grammar using HPSG. In Proc. of LREC, pages 591–600. Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan A. Sag. 2005. Minimal recursion semantics: An introduction. Research on Language and Computation, 3(2):281–332. Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for Abstract Meaning Representation. In Proc. of EACL. Hal Daume III. 2007. Frustratingly easy domain adaptation. In Proc. of ACL, pages 256–263. Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford’s graph-based neural dependency parser at the conll 2017 shared task. In Proc. of CoNLL, pages 20–30. Yantao Du, Fan Zhang, Xun Zhang, Weiwei Sun, and Xiaojun Wan. 2015. Peking: Building semantic dependency graphs with a hybrid parser. In Proc. of SemEval, pages 927–931. Long Duong, Hadi Afshar, Dominique Estival, Glen Pink, Philip Cohen, and Mark Johnson. 2017. Multilingual semantic parsing and code-switching. In Proc. of CoNLL, pages 379–389. 
Xing Fan, Emilio Monti, Lambert Mathias, and Markus Dreyer. 2017. Transfer learning for neural semantic parsing. In Proc. of Workshop on Representation Learning for NLP, pages 48–56. 383 Jenny Rose Finkel and Christopher D. Manning. 2009. Joint parsing and named entity recognition. In Proc. of NAACL-HLT, pages 326–334. Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the Abstract Meaning Representation. In Proc. of ACL, pages 1426–1436. Daniel Flickinger, Yi Zhang, and Valia Kordoni. 2012. DeepBank: A dynamically annotated treebank of the Wall Street Journal. In Proc. of Workshop on Treebanks and Linguistic Theories, pages 85–96. William Foland and James H. Martin. 2017. Abstract Meaning Representation parsing using LSTM recurrent neural networks. In Proc. of ACL, pages 463–472. Yarin Gal and Zoubin Ghahramani. 2016. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. In D D Lee, M Sugiyama, U V Luxburg, I Guyon, and R Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1019–1027. Curran Associates, Inc. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3). James Goodman, Andreas Vlachos, and Jason Naradowsky. 2016. Noise reduction and targeted exploration in imitation learning for Abstract Meaning Representation parsing. In Proc. of ACL, pages 1–11. Jiang Guo, Wanxiang Che, Haifeng Wang, and Ting Liu. 2016. Exploiting multi-typed treebanks for parsing with deep multi-task learning. CoRR, abs/1606.01161. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStep´anek, Pavel Straˇn´ak, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proc. of CoNLL, pages 1–18. Kazuma Hashimoto, caiming xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple NLP tasks. In Proc. of EMNLP, pages 1923–1933. James Henderson, Paola Merlo, Ivan Titov, and Gabriele Musillo. 2013. Multilingual joint parsing of syntactic and semantic dependencies with a latent variable model. Computational Linguistics, 39(4):949–998. Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2017. A transition-based directed acyclic graph parser for UCCA. In Proc. of ACL, pages 1127–1138. Jonathan Herzig and Jonathan Berant. 2017. Neural semantic parsing over multiple knowledge-bases. In Proc. of ACL, pages 623–628. Matthew Honnibal and Ines Montani. 2018. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear. Nitish Shirish Keskar and Richard Socher. 2017. Improving generalization performance by switching from Adam to SGD. CoRR, abs/1712.07628. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. TACL, 4:313–327. Sigrid Klerke, Yoav Goldberg, and Anders Søgaard. 2016. Improving sentence compression by learning to predict gaze. In Proc. of NAACL-HLT, pages 1528–1533. Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: Sequence-to-sequence models for parsing and generation. In Proc. of ACL, pages 146–157. Mike Lewis, Luheng He, and Luke Zettlemoyer. 2015. Joint A* CCG parsing and semantic role labelling. 
In Proc. of EMNLP, pages 1444–1454. Xavier Llu´ıs and Llu´ıs M`arquez. 2008. A joint model for parsing syntactic and semantic dependencies. In Proc. of CoNLL, pages 188–192. Wolfgang Maier and Timm Lichte. 2016. Discontinuous parsing with continuous trees. In Proc. of Workshop on Discontinuous Structures in NLP, pages 47–57. H´ector Mart´ınez Alonso and Barbara Plank. 2017. When is multitask learning effective? Semantic sequence prediction under varying data conditions. In Proc. of EACL, pages 44–53. Jonathan May. 2016. SemEval-2016 task 8: Meaning representation parsing. In Proc. of SemEval, pages 1063–1073. Jonathan May and Jay Priyadarshi. 2017. SemEval2017 task 9: Abstract Meaning Representation parsing and generation. In Proc. of SemEval, pages 536–545. David McClosky, Eugene Charniak, and Mark Johnson. 2010. Automatic domain adaptation for parsing. In Proc. of NAACL-HLT, pages 28–36. Amir More. 2016. Joint morpho-syntactic processing of morphologically rich languages in a transitionbased framework. Master’s thesis, The Interdisciplinary Center, Herzliya. 384 Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. DyNet: The dynamic neural network toolkit. CoRR, abs/1701.03980. Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proc. of IWPT, pages 149–160. Joakim Nivre, ˇZeljko Agi´c, Lars Ahrenberg, Lene Antonsen, Maria Jesus Aranzabe, Masayuki Asahara, Luma Ateyah, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, John Bauer, Kepa Bengoetxea, Riyaz Ahmad Bhat, Eckhard Bick, Victoria Bobicev, Carl B¨orstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Aljoscha Burchardt, Marie Candito, Gauthier Caron, G¨uls¸en Cebirolu Eryiit, Giuseppe G. A. 
Celano, Savas Cetin, Fabricio Chalub, Jinho Choi, Silvie Cinkov´a, C¸ ar C¸ ¨oltekin, Miriam Connor, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Arantza Diaz de Ilarraza, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Marhaba Eli, Ali Elkahky, Tomaˇz Erjavec, Rich´ard Farkas, Hector Fernandez Alcalde, Jennifer Foster, Cl´audia Freitas, Katar´ına Gajdoˇsov´a, Daniel Galbraith, Marcos Garcia, Moa G¨ardenfors, Kim Gerdes, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh G¨okrmak, Yoav Goldberg, Xavier G´omez Guinovart, Berta Gonz´ales Saavedra, Matias Grioni, Normunds Gr¯uz¯itis, Bruno Guillaume, Nizar Habash, Jan Hajiˇc, Jan Hajiˇc jr., Linh H`a M, Kim Harris, Dag Haug, Barbora Hladk´a, Jaroslava Hlav´aˇcov´a, Florinel Hociung, Petter Hohle, Radu Ion, Elena Irimia, Tom´aˇs Jel´ınek, Anders Johannsen, Fredrik Jørgensen, H¨uner Kas¸kara, Hiroshi Kanayama, Jenna Kanerva, Tolga Kayadelen, V´aclava Kettnerov´a, Jesse Kirchner, Natalia Kotsyba, Simon Krek, Veronika Laippala, Lorenzo Lambertino, Tatiana Lando, John Lee, Phng Lˆe Hng, Alessandro Lenci, Saran Lertpradit, Herman Leung, Cheuk Ying Li, Josie Li, Keying Li, Nikola Ljubeˇsi´c, Olga Loginova, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, C˘at˘alina M˘ar˘anduc, David Mareˇcek, Katrin Marheinecke, H´ector Mart´ınez Alonso, Andr´e Martins, Jan Maˇsek, Yuji Matsumoto, Ryan McDonald, Gustavo Mendonc¸a, Niko Miekka, Anna Missil¨a, C˘at˘alin Mititelu, Yusuke Miyao, Simonetta Montemagni, Amir More, Laura Moreno Romero, Shinsuke Mori, Bohdan Moskalevskyi, Kadri Muischnek, Kaili M¨u¨urisep, Pinkey Nainwani, Anna Nedoluzhko, Gunta Neˇspore-B¯erzkalne, Lng Nguyn Th, Huyn Nguyn Th Minh, Vitaly Nikolaev, Hanna Nurmi, Stina Ojala, Petya Osenova, Robert ¨Ostling, Lilja Øvrelid, Elena Pascual, Marco Passarotti, Cenel-Augusto Perez, Guy Perrier, Slav Petrov, Jussi Piitulainen, Emily Pitler, Barbara Plank, Martin Popel, Lauma Pretkalnia, Prokopis Prokopidis, Tiina Puolakainen, Sampo Pyysalo, Alexandre Rademaker, Loganathan Ramasamy, Taraka Rama, Vinit Ravishankar, Livy Real, Siva Reddy, Georg Rehm, Larissa Rinaldi, Laura Rituma, Mykhailo Romanenko, Rudolf Rosa, Davide Rovati, Benoˆıt Sagot, Shadi Saleh, Tanja Samardˇzi´c, Manuela Sanguinetti, Baiba Saul¯ite, Sebastian Schuster, Djam´e Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Dmitry Sichinava, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simk´o, M´aria ˇSimkov´a, Kiril Simov, Aaron Smith, Antonio Stella, Milan Straka, Jana Strnadov´a, Alane Suhr, Umut Sulubacak, Zsolt Sz´ant´o, Dima Taji, Takaaki Tanaka, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Zdeˇnka Ureˇsov´a, Larraitz Uria, Hans Uszkoreit, Sowmya Vajjala, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Eric Villemonte de la Clergerie, Veronika Vincze, Lars Wallin, Jonathan North Washington, Mats Wir´en, Tak-sum Wong, Zhuoran Yu, Zdenˇek ˇZabokrtsk´y, Amir Zeldes, Daniel Zeman, and Hanzhi Zhu. 2017. Universal dependencies 2.1. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In Proc. of LREC. 
Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkova, Dan Flickinger, Jan Hajic, Angelina Ivanova, and Zdenka Uresova. 2016. Towards comparability of linguistic graph banks for semantic parsing. In Proc. of LREC. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov´a, Dan Flickinger, Jan Hajiˇc, and Zdeˇnka Ureˇsov´a. 2015. SemEval 2015 task 18: Broad-coverage semantic dependency parsing. In Proc. of SemEval, pages 915–926. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Dan Flickinger, Jan Hajiˇc, Angelina Ivanova, and Yi Zhang. 2014. SemEval 2014 task 8: Broad-coverage semantic dependency parsing. In Proc. of SemEval, pages 63–72. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1). Hao Peng, Sam Thomson, and Noah A. Smith. 2017a. Deep multitask learning for semantic dependency parsing. In Proc. of ACL, pages 2037–2048. 385 Hao Peng, Sam Thomson, Swabha Swayamdipta, and Noah A. Smith. 2018. Learning joint semantic parsers from disjoint data. In Proc. of NAACL-HLT. Xiaochang Peng, Chuan Wang, Daniel Gildea, and Nianwen Xue. 2017b. Addressing the data sparsity issue in neural AMR parsing. In Proc. of EACL, pages 366–375. Barbara Plank. 2016. Keystroke dynamics as signal for shallow syntactic parsing. In Proc. of COLING, pages 609–619. Barbara Plank and Gertjan van Noord. 2011. Effective measures of domain similarity for parsing. In Proc. of ACL-HLT, pages 1566–1576. Carl Pollard and Ivan A Sag. 1994. Head-driven phrase structure grammar. University of Chicago Press. Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu, and Jonathan May. 2015. Parsing English into Abstract Meaning Representation using syntaxbased machine translation. In Proc. of EMNLP, pages 1143–1154. Kenji Sagae and Jun’ichi Tsujii. 2008. Shift-reduce dependency DAG parsing. In Proc. of COLING, pages 753–760. Sanjiv Kumar Sashank J. Reddi, Satyen Kale. 2018. On the convergence of Adam and beyond. ICLR. Sebastian Schuster and Christopher D. Manning. 2016. Enhanced English Universal Dependencies: An improved representation for natural language understanding tasks. In Proc. of LREC. ELRA. Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proc. of ACL, pages 231–235. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Elior Sulem, Omri Abend, and Ari Rappoport. 2015. Conceptual annotations preserve structure across translations: A French-English case study. In Proc. of S2MT, pages 11–22. Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu´ıs M`arquez, and Joakim Nivre. 2008. The CoNLL 2008 shared task on joint parsing of syntactic and semantic dependencies. In Proc. of CoNLL, pages 159–177. Swabha Swayamdipta, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2016. Greedy, joint syntacticsemantic parsing with stack LSTMs. In Proc. of CoNLL, pages 187–197. Swabha Swayamdipta, Sam Thomson, Chris Dyer, and Noah A. Smith. 2017. Frame-semantic parsing with softmax-margin segmental rnns and a syntactic scaffold. CoRR, abs/1706.09528. Kristina Toutanova, Aria Haghighi, and Christopher Manning. 2005. Joint learning improves semantic role labeling. In Proc. of ACL, pages 589–596. Chuan Wang, Sameer Pradhan, Xiaoman Pan, Heng Ji, and Nianwen Xue. 2016. 
CAMR at SemEval-2016 task 8: An extended transition-based AMR parser. In Proc. of SemEval, pages 1173–1178. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015a. Boosting transition-based AMR parsing with refined actions and auxiliary analyzers. In Proc. of ACL, pages 857–862. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015b. A transition-based algorithm for AMR parsing. In Proc. of NAACL, pages 366–375. Yuan Zhang and David Weiss. 2016. Stackpropagation: Improved representation learning for syntax. In Proc. of ACL, pages 1557–1566. Yue Zhang and Stephen Clark. 2009. Transitionbased parsing of the Chinese treebank using a global discriminative model. In Proc. of IWPT, pages 162–171. Junsheng Zhou, Feiyu Xu, Hans Uszkoreit, Weiguang Qu, Ran Li, and Yanhui Gu. 2016. AMR parsing with an incremental joint model. In Proc. of EMNLP, pages 680–689. Yftah Ziser and Roi Reichart. 2017. Neural structural correspondence learning for domain adaptation. In Proc. of CoNLL, pages 400–410.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 386–396 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 386 Character-Level Models versus Morphology in Semantic Role Labeling G¨ozde G¨ul S¸ahin Department of Computer Science Technische Universit¨at Darmstadt Darmstadt, Germany [email protected] Mark Steedman School of Informatics University of Edinburgh Edinburgh, Scotland [email protected] Abstract Character-level models have become a popular approach specially for their accessibility and ability to handle unseen data. However, little is known on their ability to reveal the underlying morphological structure of a word, which is a crucial skill for high-level semantic analysis tasks, such as semantic role labeling (SRL). In this work, we train various types of SRL models that use word, character and morphology level information and analyze how performance of characters compare to words and morphology for several languages. We conduct an in-depth error analysis for each morphological typology and analyze the strengths and limitations of character-level models that relate to out-of-domain data, training data size, long range dependencies and model complexity. Our exhaustive analyses shed light on important characteristics of character-level models and their semantic capability. 1 Introduction Encoding of words is perhaps the most important step towards a successful end-to-end natural language processing application. Although word embeddings have been shown to provide benefit to such models, they commonly treat words as the smallest meaning bearing unit and assume that each word type has its own vector representation. This assumption has two major shortcomings especially for languages with rich morphology: (1) inability to handle unseen or out-ofvocabulary (OOV) word-forms (2) inability to exploit the regularities among word parts. The limitations of word embeddings are particularly pronounced in sentence-level semantic tasks, especially in languages where word parts play a crucial role. Consider the Turkish sentences “K¨oy+l¨u-ler (villagers) s¸ehr+e (to town) geldi (came)” and “Sendika+lı-lar (union members) meclis+e (to council) geldi (came)”. Here the stems k¨oy (village) and sendika (union) function similarly in semantic terms with respect to the verb come (as the origin of the agents of the verb), where s¸ehir (town) and meclis (council) both function as the end point. These semantic similarities are determined by the common word parts shown in bold. However ortographic similarity does not always correspond to semantic similarity. For instance the ortographically similar words knight and night have large semantic differences. Therefore, for a successful semantic application, the model should be able to capture both the regularities, i.e, morphological tags and the irregularities, i.e, lemmas of the word. Morphological analysis already provides the aforementioned information about the words. However access to useful morphological features may be problematic due to software licensing issues, lack of robust morphological analyzers and high ambiguity among analyses. Characterlevel models (CLM), being a cheaper and accessible alternative to morphology, have been reported as performing competitively on various NLP tasks (Ling et al., 2015; Plank et al., 2016; Lee et al., 2017). However the extent to which these tasks depend on morphology is small; and their relation to semantics is weak. 
Hence, little is known on their true ability to reveal the underlying morphological structure of a word and their semantic capabilities. Furthermore, their behaviour across languages from different families; and their limitations and strengths such as handling of longrange dependencies, reaction to model complexity or performance on out-of-domain data are unknown. Analyzing such issues is a key to fully 387 understanding the character-level models. To achieve this, we perform a case study on semantic role labeling (SRL), a sentencelevel semantic analysis task that aims to identify predicate-argument structures and assign meaningful labels to them as follows: [Villagers]comers came [to town]end point We use a simple method based on bidirectional LSTMs to train three types of base semantic role labelers that employ (1) words (2) characters and character sequences and (3) gold morphological analysis. The gold morphology serves as the upper bound for us to compare and analyze the performances of character-level models on languages of varying morphological typologies. We carry out an exhaustive error analysis for each language type and analyze the strengths and limitations of character-level models compared to morphology. In regard to the diversity hypothesis which states that diversity of systems in ensembles lead to further improvement, we combine character and morphology-level models and measure the performance of the ensemble to better understand how similar they are. We experiment with several languages with varying degrees of morphological richness and typology: Turkish, Finnish, Czech, German, Spanish, Catalan and English. Our experiments and analysis reveal insights such as: • CLMs provide great improvements over whole-word-level models despite not being able to match the performance of morphology-level models (MLMs) for indomain datasets. However their performance surpass all MLMs on out-of-domain data, • Limitations and strengths differ by morphological typology. Their limitations for agglutinative languages are related to rich derivational morphology and high contextual ambiguity; whereas for fusional languages they are related to number of morphological tags (morpheme ambiguity) , • CLMs can handle long-range dependencies equally well as MLMs, • In presence of more training data, CLM’s performance is expected to improve faster than of MLM. 2 Related Work Neural SRL Methods: Neural networks have been first introduced to the SRL scene by Collobert et al. (2011), where they use a unified end-to-end convolutional network to perform various NLP tasks. Later, the combination of neural networks (LSTMs in particular) with traditional SRL features (categorical and binary) has been introduced (FitzGerald et al., 2015). Recently, it has been shown that careful design and tuning of deep models can achieve state-of-the-art with no or minimal syntactic knowledge for English and Chinese SRL. Although the architectures vary slightly, they are mostly based on a variation of bi-LSTMs. Zhou and Xu (2015); He et al. (2017) connect the layers of LSTM in an interleaving pattern where in (Wang et al., 2015; Marcheggiani et al., 2017) regular bi-LSTM layers are used. Commonly used features for the encoding layer are: pretrained word embeddings; distance from the predicate; predicate context; predicate region mark or flag; POS tag; and predicate lemma embedding. Only a few of the models (Marcheggiani et al., 2017; Marcheggiani and Titov, 2017) perform dependency-based SRL. 
Furthermore, all methods focus on languages with rich resources and less morphological complexity like English and Chinese. Character-level Models: Character-level models have proven themselves useful for many NLP tasks such as language modeling (Ling et al., 2015; Kim et al., 2016), POS tagging (Santos and Zadrozny, 2014; Plank et al., 2016), dependency parsing (Dozat et al., 2017) and machine translation (Lee et al., 2017). However the number of comparative studies that analyze their relation to morphology are rather limited. Recently, Vania and Lopez (2017) presented a unified framework, where they investigated the performances of different subword units, namely characters, morphemes and morphological analysis on language modeling task. They experimented with languages of varying morphological typologies and concluded that the performance of character models can not yet match the morphological models, albeit very close. Similarly, Belinkov et al. (2017) analyzed how different word representations help learn better morphology and model rare words on a neural MT task and concluded that characterbased representations are much better for learning 388 morphology. 3 Method Formally, we generate a label sequence ⃗l for each sentence and predicate pair: (s, p). Each lt ∈⃗l is chosen from L = {roles ∪nonrole}, where roles are language-specific semantic roles (mostly consistent with PropBank) and nonrole is a symbol to present tokens that are not arguments. Given θ as model parameters and gt as gold label for tth token, we find the parameters that minimize the negative log likelihood of the sequence: ˆθ = arg min θ − n X t=1 log(p(gt|θ, s, p)) ! (1) Label probabilities, p(lt|θ, s, p), are calculated with equations given below.First, the word encoding layer splits tokens into subwords via ρ function. ρ(w) = s0, s1, .., sn (2) As proposed by Ling et al. (2015), we treat words as a sequence of subword units. Then, the sequence is fed to a simple bi-LSTM network (Graves and Schmidhuber, 2005; Gers et al., 2000) and hidden states from each direction are weighted with a set of parameters which are also learned during training. Finally, the weighted vector is used as the word embedding given in Eq. 4. hsf, hsb = bi-LSTM(s0, s1, .., sn) (3) ⃗w = Wf · hsf + Wb · hsb + b (4) There may be more than one predicate in the sentence so it is crucial to inform the network of which arguments we aim to label. In order to mark the predicate of interest, we concatenate a predicate flag pft to the word embedding vector. ⃗xt = [⃗w; pft] (5) Final vector, ⃗xt serves as an input to another biLSTM unit. ⃗ hf, hb = bi-LSTM(xt) (6) Finally, the label distribution is calculated via softmax function over the concatenated hidden states from both directions. ⃗ p(lt|s, p) = softmax(Wl · [ ⃗hf; ⃗hb] + ⃗bl) (7) For simplicity, we assign the label with the highest probability to the input token. 1. 3.1 Subword Units We use three types of units: (1) words (2) characters and character sequences and (3) outputs of morphological analysis. Words serve as a lower bound; while morphology is used as an upper bound for comparison. Table 1 shows sample outputs of various ρ functions. 
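Before turning to the concrete ρ functions illustrated in Table 1, the model of Eqs. 2–7 is compact enough to sketch. The following is a minimal, illustrative PyTorch rendering (ours, not the authors' released implementation): batching and vocabulary handling are simplified, class and function names are invented, and the single linear layer over the concatenated final forward and initial backward states plays the role of the separate Wf, Wb weighting and bias b of Eq. 4.

```python
# Minimal sketch of the subword-based word encoder (Eqs. 2-4) and the
# argument labeler (Eqs. 5-7). Names, shapes and hyperparameters are
# illustrative; this is not the released implementation.
import torch
import torch.nn as nn

def rho_char(word):                       # rho: split a token into characters
    return ["<"] + list(word) + [">"]

def rho_char3(word):                      # rho: sliding character trigrams
    w = "<" + word + ">"
    return [w[i:i + 3] for i in range(len(w) - 2)]

class SubwordWordEncoder(nn.Module):
    """Compose a word vector from its subword units (Eqs. 2-4)."""
    def __init__(self, n_subwords, emb=200, hidden=200):
        super().__init__()
        self.emb = nn.Embedding(n_subwords, emb)
        self.bilstm = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        # one linear layer over [h_forward; h_backward] is equivalent to
        # the Wf, Wb, b weighting of Eq. 4
        self.proj = nn.Linear(2 * hidden, hidden)

    def forward(self, subword_ids):                 # shape (1, n_subwords)
        states, _ = self.bilstm(self.emb(subword_ids))
        half = states.size(-1) // 2
        h_fw = states[:, -1, :half]                 # last forward state
        h_bw = states[:, 0, half:]                  # first backward state
        return self.proj(torch.cat([h_fw, h_bw], dim=-1))   # word embedding w

class ArgumentLabeler(nn.Module):
    """Sentence-level bi-LSTM over [w; predicate flag], softmax over roles (Eqs. 5-7)."""
    def __init__(self, n_labels, hidden=200):
        super().__init__()
        self.bilstm = nn.LSTM(hidden + 1, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, word_vecs, predicate_flags):  # (1, n, hidden), (1, n, 1)
        x = torch.cat([word_vecs, predicate_flags], dim=-1)
        h, _ = self.bilstm(x)
        return torch.log_softmax(self.out(h), dim=-1)   # per-token label log-probs
```

The two ρ helpers reproduce the char and char3 segmentations of the "available" example in Table 1.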
Here, char function ρ word output char available <-a-v-a-i-l-a-b-l-e-> char3 available <av-ava-vai-ail-ila-lab-abl-ble-le> morph-DEU pr¨achtiger [pr¨achtig;Pos;Nom;Sg;Masc] morph-SPA las [el;postype=article;gen=f;num=p] morph-CAT la [el;postype=article;gen=f;num=s] morph-TUR boyundaki [boy;NOUN;A3sg;P3sg;Loc;DB;ADJ] morph-FIN tyhjyytt¨a [tyhjyys;Case=Par;Number=Sing] morph-CZE si [se;SubPOS=7;Num=X;Cas=3] Table 1: Sample outputs of different ρ functions simply splits the token into its characters. Similar to n-gram language models, char3 slides a character window of width n = 3 over the token. Finally, gold morphological features are used as outputs of morph-language. Throughout this paper, we use morph and oracle interchangably, i.e., morphology-level models (MLM) have access to gold tags unless otherwise is stated. For all languages, morph outputs the lemma of the token followed by language specific morphological tags. As an exception, it outputs additional information for some languages, such as parts-of-speech tags for Turkish. Word segmenters such as Morfessor and Byte Pair Encoding (BPE) are other commonly used subword units. Due to low scores obtained from our preliminary experiments and unsatisfactory results from previous studies (Vania and Lopez, 2017), we excluded these units. 4 Experiments We use the datasets distributed by LDC for Catalan (CAT), Spanish (SPA), German (DEU), Czech (CZE) and English (ENG) (Hajiˇc et al., 2012b,a); and datasets made available by Haverinen et al. (2015); S¸ahin and Adalı (2017) for Finnish (FIN) and Turkish (TUR) respectively 2. Datasets are 1Our implementation can be found at https:// github.com/gozdesahin/Subword_Semantic_ Role_Labeling 2Turkish PropBank is based on previous efforts (Atalay et al., 2003; Sulubacak et al., 2016; Sulubacak and Eryi˘git, 2018; Oflazer et al., 2003; S¸ahin, 2016b,a) 389 #sent #token #pred #role type CZE 39K 653K 414K 51 F ENG 39K 958K 179K 38 F DEU 36K 649K 17K 9 F SPA 14K 419K 44K 34 F CAT 13K 384K 37K 35 F FIN 12K 163K 27K 20 A TUR 4K 39K 8K 26 A Table 2: Training data statistics. A: Agglutinative, F: Fusional provided with syntactic dependency annotations and semantic roles of verbal predicates. In addition, English supplies nominal predicates annotated with semantic roles and does not provide any morphological feature. Statistics for the training split for all languages are given in Table 2. Here, #pred is number of predicates, and #role refers to number distinct semantic roles that occur more than 10 times. More detailed statistics about the datasets can be found in Hajiˇc et al. (2009); Haverinen et al. (2015); S¸ahin and Adalı (2017). 4.1 Experimental Setup To fit the requirements of the SRL task and of our model, we performed the following: Spanish, Catalan: Multiword expressions (MWE) are represented as a single token, (e.g., Confederaci´on Francesa del Trabajo), that causes notably long character sequences which are hard to handle by LSTMs. For the sake of memory efficiency and performance, we used an abbreviation (e.g., CFdT) for each MWE during training and testing. Finnish: Original dataset defines its own format of semantic annotation, such as 17:PBArgM mod|19:PBArgM mod meaning the node is an argument of 17th and 19th tokens with ArgM-mod (temporary modifier) semantic role. They have been converted into CoNLL-09 tabular format, where each predicate’s arguments are given in a specific column. 
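The preprocessing steps just described are simple string manipulations; the helpers below illustrate them under our own assumptions about the file format (underscore-joined multiword tokens and the PBArgM_mod-style role encoding). They are ours and not taken from the paper's code.

```python
# Illustrative preprocessing helpers (ours, format details assumed).
def abbreviate_mwe(token):
    """Shorten an underscore-joined MWE by its initials,
    e.g. 'Confederación_Francesa_del_Trabajo' -> 'CFdT'."""
    return "".join(part[0] for part in token.split("_"))

def parse_finnish_args(cell):
    """Parse '17:PBArgM_mod|19:PBArgM_mod' into [(17, 'ArgM-mod'), (19, 'ArgM-mod')]
    so the roles can be redistributed into per-predicate CoNLL-09 argument columns."""
    pairs = []
    if cell and cell != "_":
        for item in cell.split("|"):
            idx, label = item.split(":", 1)
            if label.startswith("PB"):
                label = label[2:].replace("_", "-")
            pairs.append((int(idx), label))
    return pairs
```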
Turkish: Words are splitted from derivational boundaries in the original dataset, where each inflectional group is represented as a separate token. We first merge boundaries of the same word, i.e, tokens of the word, then we use our own ρ function to split words into subwords. Training and Evaluation: We lowercase all tokens beforehand and place special start and end of the token characters. For all experiments, we initialized weight parameters orthogonally and used one layer bi-LSTMs both for subword composition and argument labeling with hidden size of 200. Subword embedding size is chosen as 200. We used gradient clipping and early stopping to prevent overfitting. Stochastic gradient descent is used as the optimizer. The initial learning rate is set to 1 and reduced by half if scores on development set do not improve after 3 epochs. We use the provided splits and evaluate the results with the official evaluation script provided by CoNLL09 shared task. In this work (and in most of the recent SRL works), only the scores for argument labeling are reported, which may cause confusions for the readers while comparing with older SRL studies. Most of the early SRL work report combined scores (argument labeling with predicate sense disambiguation (PSD)). However, PSD is considered a simpler task with higher F1 scores 3. Therefore, we believe omitting PSD helps us gain more useful insights on character level models. 5 Results and Analysis Our main results on test and development sets for models that use words, characters (char), character trigrams (char3) and morphological analyses (morph) are given in Table 3. We calculate improvement over word (IOW) for each subword model and improvement over the best character model (IOC) for the morph. IOW and IOC values are calculated on the test set. The biggest improvement over the word baseline is achieved by the models that have access to morphology for all languages (except for English) as expected. Character trigrams consistently outperformed characters by a small margin. Same pattern is observed on the results of the development set. IOW has the values between 0% to 38% while IOC values range between 2%-10% dependending on the properties of the language and the dataset. We analyze the results separately for agglutinative and fusional languages and reveal the links between certain linguistic phenomena and the IOC, IOW values. 3For instance in English CoNLL-09 dataset, 87% of the predicates are annotated with their first sense, hence even a dummy classifier would achieve 87% accuracy. The best system from CoNLL-09 shared task reports 85.63 F1 on English evaluation dataset, however when the results of PSD are discarded, it drops down to 81. 390 (a) Finnish - Contextual ambiguity (b) Turkish - Derivational morphology Figure 1: Differences in model performances on agglutinative languages word char char3 morph F1 F1 IOW% F1 IOW% F1 IOW% IOC% FIN 48.91 67.24 37.46 67.78 38.58 71.15 45.47 4.97 51.65 66.82 67.08 71.88 TUR 44.82 55.89 24.68 56.60 26.28 59.38 32.48 4.91 43.14 54.48 55.41 58.91 SPA 64.30 67.90 5.61 68.43 6.42 69.39 7.92 2.25 64.53 67.64 67.64 69.17 CAT 65.45 70.56 7.82 71.34 9.00 73.24 11.90 2.66 65.67 70.43 70.48 72.36 CZE 63.58 74.04 16.45 74.98 17.93 80.66 26.87 7.58 72.69 74.58 75.59 81.06 DEU 54.78 63.71 16.29 65.56 19.68 69.35 26.58 5.77 53.76 62.75 63.70 72.18 ENG 81.19 81.61 0.52 80.65 -0.67 78.67 79.22 78.85 Table 3: F1 scores of word, character, character trigram and morphology models for argument labeling. 
Best F1 for each language is shown in bold. First row: results on test, Second row: results on development. Agglutinative languages have many morphemes attached to a word like beads on a string. This leads to high number of OOV words and cause word lookup models to fail. Hence, the highest IOWs by character models are achieved on these languages: Finnish and Turkish. This language family has one-to-one morpheme to meaning mapping with small orthographic differences (e.g., mıs¸, mis¸, mus¸, m¨us¸ for past perfect tense), that can be easily extracted from the data. Even though each morpheme has only one interpretation, each word (consisting of many morphemes) has usually more than one. For instance two possible analyses for the Turkish word “dolar” are (1) “dol+Verb+Positive+Aorist+3sg” (it fills), (2) “dola+Verb+Positive+Aorist+3sg” (he/she wraps). For a syntactic task, models are not obliged to learn the difference between the two; whereas for a semantic task like SRL, they are. We will refer to this issue as contextual ambiguity. Another important linguistic issue for agglutinative languages is the complex interaction between morphology and syntax, which is usually achieved via derivational morphemes. In other words, unlike inflectional morphemes that only give information on word-level semantics, derivational morphemes provide more clues on sentence-level semantics. The effects of these two phenomena on model performances is shown in Fig. 1. Scores given in Fig. 1 are absolute F1 scores for each model. For the analysis in Fig. 1a, we separately calculated F1 scores of each model on words that have been observed with at least two different set of morphological features (ambiguous), and one set of features (non-ambiguous). Due to the low number of ambiguous words in Turkish dataset (≤100), it has been calculated for Finnish only. Similarly, for the derivational morphology analysis in Fig. 1b, we have separately calculated scores for sentences containing derived words (derivational), and simple sentences without any derivations. Both analyses show that access to gold morphological tags (oracle) provided big performance gains on arguments with contextual ambiguity and sentences with derived words. Moderate IOC signals that char and char3 learns to imitate the “beads” and their “predictable order” on the string (in the absence of the aforementioned issues). 391 Figure 2: x axis: Number of morphological features; y axis: Targeted F1 scores Fusional languages may have many morphemes in a word. Spanish and Catalan have relatively low morpheme per word ratio that results with low OOV% (5.63 and 5.40 for Spanish and Catalan respectively); whereas, German and Czech have OOV% of 7.93 and 7.98 (Hajiˇc et al., 2009). We observe that IOW by character models are well aligned with OOV percentages of the datasets. Unlike agglutinative languages, single morpheme can serve multiple purposes in fusional languages. For instance, “o” (e.g., habl-o) may signal 1st person singular present tense, or 3rd person singular past tense. We count the number of surface forms with at least two different features and use their ratio (#ambiguous forms/#total forms) as a proxy to morphological complexity of the language. The complexities are approximated as 22%, 16%, 15% for Czech, Spanish and Catalan respectively; which are aligned with the observed IOCs. Since there is no unique morpheme to meaning mapping, generally multiple morphological tags are used to resolve the morpheme ambiguity. 
Therefore there is an indirect relation between the number of morphological tags used and the ambiguity of the word. To demonstrate this phenomena, we calculate targeted F1 scores on arguments with varying number of morphological features. Results using feature bins of [1-2], [3-4] and [5-6] are given in Fig. 2. As the number of features increase, the performance gap between oracle and character models grows dramatically for Czech and Spanish, while it stays almost fixed for Finnish. This finding suggests that high number of morphological tags signal the vagueness/complex cases in fusional languages where character models struggle; and also shows that the complexity can not be directly explained by number of morphological tags for agglutinative languages. German is known for having many compound words and compound lemmas that lead to high OOV% for lemma; and also is less ambiguous (9%). Therefore we would expect a lower IOC. However, the evaluation set consists only of 550 predicates and 1073 arguments, hence small changes in prediction lead to dramatic percentage changes. 5.1 Similarity between models One way to infer similarity is to measure diversity. Consider a set of baseline models that are not diverse, i.e., making similar errors with similar inputs. In such a case, combination of these models would not be able to overcome the biases of the learners, hence the combination would not achieve a better result. In order to test if character and morphological models are similar, we combine them and measure the performance of the ensemble. Suppose that a prediction pi is generated for each token by a model mi, i ∈n, then the final prediction is calculated from these predictions by: pfinal = f(p0, p1, .., pn|φ) (8) where f is the combining function with parameter φ. The simplest global approach is averaging (AVG), where f is simply the mean function and pis are the log probabilities. Mean function combines model outputs linearly, therefore ignores the nonlinear relation between base models/units. In order to exploit nonlinear connections, we learn the parameters φ of f via a simple linear layer followed by sigmoid activation. In other words, we train a new model that learns how to best combine the predictions from subword models. This ensemble technique is generally referred to as stacking or stacked generalization (SG). 4 Although not guaranteed, diverse models can be achieved by altering the input representation, 4To train the SG model, we have used one linear layer with 64 hidden units followed by sigmoid nonlinear activation. Weights are orthogonally initialized and optimized via adam algorithm with a learning rate of 0.02 for 25 epochs. 392 char+char3 char+oracle char3+oracle Avg SG IOB% Avg SG IOB% Avg SG IOB% Czech 76.24 76.26 2.03 80.36 81.06 0.49 80.57 81.10 0.55 Finnish 70.31 70.29 4.58 72.73 72.88 2.42 72.72 73.02 2.62 Turkish 59.43 59.39 6.34 61.98 62.07 4.53 60.56 60.74 2.28 Spanish 70.01 70.05 3.16 71.80 71.75 3.47 71.64 71.62 3.24 Catalan 72.79 72.71 2.03 74.80 74.82 2.16 75.15 75.18 2.66 German 66.84 66.97 2.15 71.02 71.16 2.62 71.31 71.25 2.84 Table 4: Results of ensembling via averaging (Avg) and stack generalization (SG). IOB: Improvement Over Best of baseline models the learning algorithm, training data or the hyperparameters. To ensure that the only factor contributing to the diversity of the learners is the input representation, all parameters, training data and model settings are left unchanged. Our results are given in Table 4. 
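The two combination schemes can be sketched as follows. This is our own illustrative rendering: footnote 4 only specifies a 64-unit linear layer with a sigmoid activation, so the exact input and output wiring of the stacked-generalization combiner is an assumption on our part.

```python
# Sketch (ours) of ensembling per-token label log-probabilities from several
# base models: simple averaging (AVG) versus a small learned combiner (SG).
import torch
import torch.nn as nn

def average_ensemble(log_probs):             # list of (n_tokens, n_labels) tensors
    return torch.stack(log_probs, dim=0).mean(dim=0)

class StackedGeneralizer(nn.Module):
    """Learned combiner: linear layer with 64 hidden units + sigmoid (footnote 4);
    the projection back to the label space is our assumption."""
    def __init__(self, n_models, n_labels, hidden=64):
        super().__init__()
        self.hidden = nn.Linear(n_models * n_labels, hidden)
        self.out = nn.Linear(hidden, n_labels)

    def forward(self, log_probs):            # list of (n_tokens, n_labels) tensors
        x = torch.cat(log_probs, dim=-1)     # (n_tokens, n_models * n_labels)
        h = torch.sigmoid(self.hidden(x))
        return torch.log_softmax(self.out(h), dim=-1)
```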
IOB shows the improvement over the best of the baseline models in the ensemble. Averaging and stacking methods gave similar results, meaning that there is no immediate nonlinear relations between units. We observe two language clusters: (1) Czech and agglutinative languages (2) Spanish, Catalan, German and English. The common property of that separate clusters are (1) high OOV% and (2) relatively low OOV%. Amongst the first set, we observe that the improvement gained by character-morphology ensembles is higher (shown with green) than ensembles between characters and character trigrams (shown with red), whereas the opposite is true for the second set of languages. It can be interpreted as character level models being more similar to the morphology level models for the first cluster, i.e., languages with high OOV%, and characters and morphology being more diverse for the second cluster. 6 Limitations and Strengths To expand our understanding and reveal the limitations and strengths of the models, we analyze their ability to handle long range dependencies, their relation with training data and model size; and measure their performances on out of domain data. 6.1 Long Range Dependencies Long range dependency is considered as an important linguistic issue that is hard to solve. Therefore the ability to handle it is a strong performance indicator. To gain insights on this issue, we measure how models perform as the distance between the predicate and the argument increases. The unit of measure is number of tokens between the two; and argument is defined as the head of the argument phrase in accordance with dependency-based SRL task. For that purpose, we created bins of [0-4], [5-9], [10-14] and [15-19] distances. Then, we have calculate F1 scores for arguments in each bin. Due to low number of predicate-argument pairs in buckets, we could not analyze German and Turkish; and also the bin [15-19] is only used for Czech. Our results are shown in Fig. 3. We observe that either char or char3 closely follows the oracle for all languages. The gap between the two does not increase with the distance, suggesting that the performance gap is not related to long range dependencies. In other words, both characters and the oracle handle long range dependencies equally well. 6.2 Training Data Size We analyzed how char3 and oracle models perform with respect to the training data size. For that purpose, we trained them on chunks of increasing size and evaluate on the provided test split. We used units of 2000 sentences for German and Czech; and 400 for Turkish. Results are shown in Fig. 4. Apparently as the data size increases, the performances of both models logarithmically increase - with a varying speed. To speak in statistical terms, we fit a logarithmic curve to the observed F1 scores (shown with transparent lines) and check the x coefficients, where x refers to the number of sentences. This coefficient can be considered as an approximation to the speed of growth with data size. We observe that the coefficient is higher for char3 than oracle for all languages. It can be interpreted as: in the presence of more training data, char3 may surpass the oracle; i.e., char3 relies on data more than the oracle. 6.3 Out-of-Domain (OOD) Data As part of the CoNLL09 shared task (Hajiˇc et al., 2009), out of domain test sets are provided for 393 Figure 3: X axis: Distance between the predicate and the argument, Y axis: F1 scores on argument labels Figure 4: Performance of units w.r.t training data size. 
X axis: Number of sentences, Y axis: F1 score word char IOW% char3 IOW% oracle IOW% IOC% CZE 69.97 72.98 4.30 73.24 4.67 72.28 3.30 -1.31 DEU 51.50 57.05 10.78 55.75 8.24 38.51 -25.24 -45.17 ENG 66.47 68.83 0.70 70.22 0.23 Table 5: F1 scores on out of domain data. Best scores are shown with bold. three languages: Czech, German and English. We test our models trained on regular training dataset on these OOD data. The results are given in Table 5. Here, we clearly see that the best model has shifted from oracle to character based models. The dramatic drop in German oracle model is due to the high lemma OOV rate which is a consequence of keeping compounds as a single lemma. Czech oracle model performs reasonably however is unable to beat the generalization power of the char3 model. Furthermore, the scores of the character models in Table 5 are higher than the best OOD scores reported in the shared task (Hajiˇc et al., 2009); even though our main results on evaluation set are not (except for Czech). This shows that character-level models have increased robustness to out-of-domain data due to their ability to learn regularities among data. 6.4 Model Size Throughout this paper, our aim was to gain insights on how models perform on different languages rather than scoring the highest F1. For this reason, we used a model that can be considered small when compared to recent neural SRL models and avoided parameter search. However, char3 oracle F1 I (%) F1 I (%) Finnish ℓ= 1 67.78 71.15 ℓ= 2 67.62 -0.2 75.71 6.4 Turkish ℓ= 1 56.60 59.38 ℓ= 2 56.93 0.5 61.02 2.7 Spanish ℓ= 1 68.43 69.39 ℓ= 2 69.30 1.3 71.56 3.1 Catalan ℓ= 1 71.34 73.24 ℓ= 2 71.71 0.5 74.84 2.2 Table 6: Effect of layer size on model performances. I: Improvement over model with one layer. we wonder how the models behave when given a larger network. To answer this question, we trained char3 and oracle models with more layers for two fusional languages (Spanish, Catalan), and two agglutinative languages (Finnish, Turkish). The results given in Table 6 clearly shows that model complexity provides relatively more benefit to morphological models. This indicates that morphological signals help to extract more complex linguistic features that have semantic clues. 6.5 Predicted Morphological Tags Although models with access to gold morphological tags achieve better F1 scores than character models, they can be less useful a in reallife scenario since they require gold tags at test time. To predict the performance of morphologylevel models in such a scenario, we train the same models with the same parameters with predicted morphological features. Predicted tags 394 Figure 5: F1 scores for best-char (best of the CLMs) and model with predicted (predictedmorph) and gold morphological tags (goldmorph). were only available for German, Spanish, Catalan and Czech. Our results given in Fig. 5, show that (except for Czech), predicted morphological tags are not as useful as characters alone. 7 Conclusion Character-level neural models are becoming the defacto standard for NLP problems due to their accessibility and ability to handle unseen data. In this work, we investigated how they compare to models with access to gold morphological analysis, on a sentence-level semantic task. We evaluated their quality on semantic role labeling in a number of agglutinative and fusional languages. Our results lead to the following conclusions: • For in-domain data, character-level models cannot yet match the performance of morphology-level models. 
However, they still provide considerable advantages over whole-word models, • Their shortcomings depend on the morphology type. For agglutinative languages, their performance is limited on data with rich derivational morphology and high contextual ambiguity (morphological disambiguation); and for fusional languages, they struggle on tokens with high number of morphological tags, • Similarity between character and morphology-level models is higher than the similarity within character-level (char and char-trigram) models on languages with high OOV%; and vice versa, • Their ability to handle long-range dependencies is very similar to morphology-level models, • They rely relatively more on training data size. Therefore, given more training data their performance will improve faster than morphology-level models, • They perform substantially well on out of domain data, surpassing all morphology-level models. However, relatively less improvement is expected when model complexity is increased, • They generally perform better than models that only have access to predicted/silver morphological tags. 8 Acknowledgements G¨ozde G¨ul S¸ahin was a PhD student at Istanbul Technical University and a visiting research student at University of Edinburgh during this study. She was funded by T¨ubitak (The Scientific and Technological Research Council of Turkey) 2214A scholarship during her visit to University of Edinburgh. She was granted access to CoNLL-09 Semantic Role Labeling Shared Task data by Linguistic Data Consortium (LDC). This work was supported by ERC H2020 Advanced Fellowship GA 742137 SEMANTAX and a Google Faculty award to Mark Steedman. We would like to thank Adam Lopez for fruitful discussions, guidance and support during the first author’s visit. References Nart Bedin Atalay, Kemal Oflazer, and Bilge Say. 2003. The Annotation Process in the Turkish Treebank. In Proceedings of 4th International Workshop on Linguistically Interpreted Corpora, LINC at EACL 2003, Budapest, Hungary, April 13-14, 2003. Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James R. Glass. 2017. What do Neural Machine Translation Models Learn about Morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers. pages 861–872. Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural Language Processing (almost) from Scratch. Journal of Machine Learning Research 12:2461–2505. 395 Timothy Dozat, Peng Qi, and Christopher D Manning. 2017. Stanford’s Graph-based Neural Dependency Parser at the CoNLL 2017 Shared Task. Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies pages 20–30. Nicholas FitzGerald, Oscar T¨ackstr¨om, Kuzman Ganchev, and Dipanjan Das. 2015. Semantic Role Labeling with Neural Network Factors. In EMNLP. pages 960–970. Felix A. Gers, J¨urgen A. Schmidhuber, and Fred A. Cummins. 2000. Learning to Forget: Continual Prediction with LSTM. Neural Comput. 12(10):2451– 2471. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks 18(5-6):602–610. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStˇep´anek, Pavel Straˇn´ak, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. 
The CoNLL2009 Shared Task: Syntactic and Semantic Dependencies in Multiple Languages. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task. Association for Computational Linguistics, Stroudsburg, PA, USA, CoNLL ’09, pages 1–18. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Adam Meyers, Jan ˇStˇep´anek, Joakim Nivre, Pavel Straˇn´ak, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2012a. 2009 CoNLL Shared Task Part 1 LDC2012T04. Web Download. Jan Hajiˇc, Maria A. Mart´ı, Lluis Marquez, Joakim Nivre, Jan ˇStˇep´anek, Sebastian Pad´o, and Pavel Straˇn´ak. 2012b. 2009 CoNLL Shared Task Part 1 LDC2012T03. Web Download. Katri Haverinen, Jenna Kanerva, Samuel Kohonen, Anna Missila, Stina Ojala, Timo Viljanen, Veronika Laippala, and Filip Ginter. 2015. The Finnish Proposition Bank. Language Resources and Evaluation 49(4):907–926. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what’s next. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-Aware Neural Language Models. In AAAI. pages 2741–2749. Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully Character-Level Neural Machine Translation without Explicit Segmentation. TACL 5:365– 378. Wang Ling, Tiago Luis, Luis Marujo, Ramon F Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In EMNLP. pages 1520– 1530. Diego Marcheggiani, Anton Frolov, and Ivan Titov. 2017. A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based Semantic Role Labeling. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017). Association for Computational Linguistics, Vancouver, Canada, pages 411–420. Diego Marcheggiani and Ivan Titov. 2017. Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 1507–1516. Kemal Oflazer, Bilge Say, Dilek Zeynep Hakkani-T¨ur, and G¨okhan T¨ur. 2003. Building a Turkish treebank. In Treebanks, Springer, pages 261–277. Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 2: Short Papers. G¨ozde G¨ul S¸ahin and Es¸ref Adalı. 2017. Annotation of semantic roles for the Turkish Proposition Bank. Language Resources and Evaluation pages 1–34. Cicero D Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In Proceedings of the 31st International Conference on Machine Learning (ICML-14). pages 1818–1826. Umut Sulubacak and G¨uls¸en Eryi˘git. 2018. Implementing Universal Dependency, Morphology and Multiword Expression Annotation Standards for Turkish Language Processing. Turkish Journal of Electrical Engineering Computer Sciences pages 1– 23. Umut Sulubacak, Tu˘gba Pamay, and G¨uls¸en Eryi˘git. 2016. IMST: A Revisited Turkish Dependency Treebank. In Proceedings of the 1st International Conference on Turkic Computational Linguistics (TurCLing) at CICLing, Konya, Turkey, 2016. 
Clara Vania and Adam Lopez. 2017. From Characters to Words to in Between: Do We Capture Morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers. pages 2016–2027. 396 Zhen Wang, Tingsong Jiang, Baobao Chang, and Zhifang Sui. 2015. Chinese Semantic Role Labeling with Bidirectional Recurrent Neural Networks. In EMNLP. pages 1626–1631. Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. pages 1127–1137. G¨ozde G¨ul S¸ahin. 2016a. Framing of Verbs for Turkish PropBank. In In Proceedings of 1st International Conference on Turkic Computational Linguistics, TurCLing. G¨ozde G¨ul S¸ahin. 2016b. Verb Sense Annotation for Turkish PropBank via Crowdsourcing. In Computational Linguistics and Intelligent Text Processing - 17th International Conference, CICLing 2016, Konya, Turkey, April 3-9, 2016, Revised Selected Papers, Part I. pages 496–506.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 397–407 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 397 AMR Parsing as Graph Prediction with Latent Alignment Chunchuan Lyu1 Ivan Titov1,2 1ILCC, School of Informatics, University of Edinburgh 2ILLC, University of Amsterdam Abstract Abstract meaning representations (AMRs) are broad-coverage sentence-level semantic representations. AMRs represent sentences as rooted labeled directed acyclic graphs. AMR parsing is challenging partly due to the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences. We introduce a neural parser which treats alignments as latent variables within a joint probabilistic model of concepts, relations and alignments. As exact inference requires marginalizing over alignments and is infeasible, we use the variational autoencoding framework and a continuous relaxation of the discrete alignments. We show that joint modeling is preferable to using a pipeline of align and parse. The parser achieves the best reported results on the standard benchmark (74.4% on LDC2016E25). 1 Introduction Abstract meaning representations (AMRs) (Banarescu et al., 2013) are broad-coverage sentencelevel semantic representations. AMR encodes, among others, information about semantic relations, named entities, co-reference, negation and modality. The semantic representations can be regarded as rooted labeled directed acyclic graphs (see Figure 1). As AMR abstracts away from details of surface realization, it is potentially beneficial in many semantic related NLP tasks, including text summarization (Liu et al., 2015; Dohare and Karnick, 2017), machine translation (Jones et al., 2012) and question answering (Mitra and Baral, 2016). The boys must not go ARG2 polarity ARG0 boy go-02 obligate-01 1 3 2 4 Figure 1: An example of AMR, the dashed lines denote latent alignments, obligate-01 is the root. Numbers indicate depth-first traversal order. AMR parsing has recently received a lot of attention (e.g., (Flanigan et al., 2014; Artzi et al., 2015; Konstas et al., 2017)). One distinctive aspect of AMR annotation is the lack of explicit alignments between nodes in the graph (concepts) and words in the sentences. Though this arguably simplified the annotation process (Banarescu et al., 2013), it is not straightforward to produce an effective parser without relying on an alignment. Most AMR parsers (Damonte et al., 2017; Flanigan et al., 2016; Werling et al., 2015; Wang and Xue, 2017; Foland and Martin, 2017) use a pipeline where the aligner training stage precedes training a parser. The aligners are not directly informed by the AMR parsing objective and may produce alignments suboptimal for this task. In this work, we demonstrate that the alignments can be treated as latent variables in a joint probabilistic model and induced in such a way as to be beneficial for AMR parsing. Intuitively, in our probabilistic model, every node in a graph is assumed to be aligned to a word in a sentence: each concept is predicted based on the corresponding RNN state. Similarly, graph edges (i.e. relations) are predicted based on representations of concepts and aligned words (see Figure 2). As alignments are latent, exact inference requires marginalizing over latent alignments, which is in398 feasible. Instead we use variational inference, specifically the variational autoencoding framework of Kingma and Welling (2014). 
Using discrete latent variables in deep learning has proven to be challenging (Mnih and Gregor, 2014; Bornschein and Bengio, 2015). We use a continuous relaxation of the alignment problem, relying on the recently introduced Gumbel-Sinkhorn construction (Mena et al., 2018). This yields a computationally-efficient approximate method for estimating our joint probabilistic model of concepts, relations and alignments. We assume injective alignments from concepts to words: every node in the graph is aligned to a single word in the sentence and every word is aligned to at most one node in the graph. This is necessary for two reasons. First, it lets us treat concept identification as sequence tagging at test time. For every word we would simply predict the corresponding concept or predict NULL to signify that no concept should be generated at this position. Secondly, Gumbel-Sinkhorn can only work under this assumption. This constraint, though often appropriate, is problematic for certain AMR constructions (e.g., named entities). In order to deal with these cases, we re-categorized AMR concepts. Similar recategorization strategies have been used in previous work (Foland and Martin, 2017; Peng et al., 2017). The resulting parser achieves 74.4% Smatch score on the standard test set when using LDC2016E25 training set,1 an improvement of 3.4% over the previous best result (van Noord and Bos, 2017). We also demonstrate that inducing alignments within the joint model is indeed beneficial. When, instead of inducing alignments, we follow the standard approach and produce them on preprocessing, the performance drops by 0.9% Smatch. Our main contributions can be summarized as follows: • we introduce a joint probabilistic model for alignment, concept and relation identification; • we demonstrate that a continuous relaxation can be used to effectively estimate the model; • the model achieves the best reported results.2 1The standard deviation across multiple training runs was 0.16%. 2The code can be accessed from https://github. com/ChunchuanLv/AMR_AS_GRAPH_PREDICTION 2 Probabilistic Model In this section we describe our probabilistic model and the estimation technique. In section 3, we describe preprocessing and post-processing (including concept re-categorization, sense disambiguation, wikification and root selection). 2.1 Notation and setting We will use the following notation throughout the paper. We refer to words in the sentences as w = (w1, . . . , wn), where n is sentence length, wk ∈V for k ∈{1 . . . , n}. The concepts (i.e. labeled nodes) are c = (c1, . . . , cm), where m is the number of concepts and ci ∈C for i ∈{1 . . . , m}. For example, in Figure 1, c = (obligate, go, boy, -).3 Note that senses are predicted at post-processing, as discussed in Section 3.2 (i.e. go is labeled as go-02). A relation between ‘predicate concept’ i and ‘argument concept’ j is denoted by rij ∈R; it is set to NULL if j is not an argument of i. In our example, r2,3 = ARG0 and r1,3 = NULL. We will use R to denote all relations in the graph. To represent alignments, we will use a = {a1, . . . , am}, where ai ∈{1, . . . , n} returns the index of a word aligned to concept i. In our example, a1 = 3. All three model components rely on bidirectional LSTM encoders (Schuster and Paliwal, 1997). We denote states of BiLSTM (i.e. concatenation of forward and backward LSTM states) as hk ∈Rd (k ∈{1, . . . , n}). 
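To make the notation concrete, the running example of Figure 1 can be written out as follows. The snippet is purely illustrative; r_{2,3} = ARG0 and a_1 = 3 are given in the text, while the polarity attachment and the remaining alignment links are our reading of the figure.

```python
# Figure 1 example, written in the paper's notation (indices are 1-based).
words    = ["The", "boys", "must", "not", "go"]          # w, with n = 5
concepts = ["obligate", "go", "boy", "-"]                # c, with m = 4 (senses added later)
# relations r_ij: label if concept j is an argument of concept i;
# pairs not listed (e.g. r_{1,3}) are NULL
relations = {(1, 2): "ARG2", (2, 3): "ARG0", (2, 4): "polarity"}
# injective alignment a: concept index -> word index, e.g. a_1 = 3 (obligate -> "must")
alignment = {1: 3, 2: 5, 3: 2, 4: 4}
```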
The sentence encoder takes pre-trained fixed word embeddings, randomly initialized lemma embeddings, part-ofspeech and named-entity tag embeddings. 2.2 Method overview We believe that using discrete alignments, rather than attention-based models (Bahdanau et al., 2015) is crucial for AMR parsing. AMR banks are a lot smaller than parallel corpora used in machine translation (MT) and hence it is important to inject a useful inductive bias. We constrain our alignments from concepts to words to be injective. First, it encodes the observation that concepts are mostly triggered by single words (especially, after re-categorization, Section 3.1). Second, it implies 3The probabilistic model is invariant to the ordering of concepts, though the order affects the inference algorithm (see Section 2.5). We use depth-first traversal of the graph to generate the ordering. 399 go-02 The boys must not go ? boy obligate-01 ARG0 Classifier RNN encoder Figure 2: Relation identification: predicting a relation between boy and go-02 relying on the two concepts and corresponding RNN states. that each word corresponds to at most one concept (if any). This encourages competition: alignments are mutually-repulsive. In our example, obligate is not lexically similar to the word must and may be hard to align. However, given that other concepts are easy to predict, alignment candidates other than must and the will be immediately ruled out. We believe that these are the key reasons for why attention-based neural models do not achieve competitive results on AMR (Konstas et al., 2017) and why state-of-the-art models rely on aligners. Our goal is to combine best of two worlds: to use alignments (as in state-of-the-art AMR methods) and to induce them while optimizing for the end goal (similarly to the attention component of encoder-decoder models). Our model consists of three parts: (1) the concept identification model Pθ(c|a, w); (2) the relation identification model Pφ(R|a, w, c) and (3) the alignment model Qψ(a|c, R, w).4 Formally, (1) and (2) together with the uniform prior over alignments P(a) form the generative model of AMR graphs. In contrast, the alignment model Qψ(a|c, R, w), as will be explained below, is approximating the intractable posterior Pθ,φ(a|c, R, w) within that probabilistic model. In other words, we assume the following model for generating the AMR graph: Pθ,φ(c, R|w)= X a P(a)Pθ(c|a, w)Pφ(R|a, w, c) = X a P(a) m Y i=1 P(ci|hai) m Y i,j=1 P(rij|hai,ci,haj,cj) 4θ, φ and ψ denote all parameters of the models. AMR concepts are assumed to be generated conditional independently relying on the BiLSTM states and surface forms of the aligned words. Similarly, relations are predicted based only on AMR concept embeddings and LSTM states corresponding to words aligned to the involved concepts. Their combined representations are fed into a bi-affine classifier (Dozat and Manning, 2017) (see Figure 2). The expression involves intractable marginalization over all valid alignments. As standard in variational autoencoders, VAEs (Kingma and Welling, 2014), we lower-bound the loglikelihood as log Pθ,φ(c, R|w) ≥EQ[log Pθ(c|a, w)Pφ(R|a, w, c)] −DKL(Qψ(a|c, R, w)||P(a)), (1) where Qψ(a|c, R, w) is the variational posterior (aka the inference network), EQ[. . .] refers to the expectation under Qψ(a|c, R, w) and DKL is the Kullback-Liebler divergence. In VAEs, the lower bound is maximized both with respect to model parameters (θ and φ in our case) and the parameters of the inference network (ψ). 
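The relation component just mentioned is a bi-affine classifier; a minimal sketch of such a scorer is given below (ours, for illustration). The projections and the per-label bilinear form correspond to the M_h, M_d and C_r parameters made explicit in Section 2.4; shapes, names and the omission of bias terms are our simplifications.

```python
# Sketch (ours) of a bi-affine relation scorer over concept/state pairs.
import torch
import torch.nn as nn

class BiaffineRelationScorer(nn.Module):
    def __init__(self, state_dim, concept_dim, proj_dim, n_relations):
        super().__init__()
        in_dim = state_dim + concept_dim                 # [h_{a_i} ; c_i]
        self.head_proj = nn.Linear(in_dim, proj_dim)     # plays the role of M_h
        self.dep_proj = nn.Linear(in_dim, proj_dim)      # plays the role of M_d
        # one bilinear matrix C_r per relation label (NULL included)
        self.bilinear = nn.Parameter(torch.randn(n_relations, proj_dim, proj_dim))

    def forward(self, head_repr, dep_repr):
        # head_repr, dep_repr: (m, state_dim + concept_dim) for m concepts
        h = self.head_proj(head_repr)                    # (m, proj_dim)
        d = self.dep_proj(dep_repr)                      # (m, proj_dim)
        # scores[r, i, j] = h_i^T C_r d_j for every concept pair and label
        scores = torch.einsum("ip,rpq,jq->rij", h, self.bilinear, d)
        return torch.log_softmax(scores, dim=0)          # label distribution per (i, j)
```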
Unfortunately, gradient-based optimization with discrete latent variables is challenging. We use a continuous relaxation of our optimization problem, where realvalued vectors ˆai ∈Rn (for every concept i) approximate discrete alignment variables ai. This relaxation results in low-variance estimates of the gradient using the parameterization trick (Kingma and Welling, 2014), and ensures fast and stable training. We will describe the model components and the relaxed inference procedure in detail in sections 2.6 and 2.7. Though the estimation procedure requires the use of the relaxation, the learned parser is straightforward to use. Given our assumptions about the alignments, we can independently choose for each word wk (k = 1, . . . , m) the most probably concept according to Pθ(c|hk). If the highest scoring option is NULL, no concept is introduced. The relations could then be predicted relying on Pφ(R|a, w, c). This would have led to generating inconsistent AMR graphs, so instead we search for the highest scoring valid graph (see Section 3.2). Note that the alignment model Qψ is not used at test time and only necessary to train accurate concept and relation identification models. 400 2.3 Concept identification model The concept identification model chooses a concept c (i.e. a labeled node) conditioned on the aligned word k or decides that no concept should be introduced (i.e. returns NULL). Though it can be modeled with a softmax classifier, it would not be effective in handling rare or unseen words. First, we split the decision into estimating the probability of concept category τ(c) ∈T (e.g. ‘number’, ’frame’) and estimating the probability of the specific concept within the chosen category. Second, based on a lemmatizer and training data5 we prepare one candidate concept ek for each word k in vocabulary (e.g., it would propose want if the word is wants). Similar to Luong et al. (2015), our model can then either copy the candidate ek or rely on the softmax over potential concepts of category τ. Formally, the concept prediction model is defined as Pθ(c|hk, wk) = P(τ(c)|hk, wk)× [[ek = c]] × exp(vT copyhk) + exp(vT c hk) Z(hk, θ) , where the first multiplicative term is a softmax classifier over categories (including NULL); vcopy, vc ∈Rd (for c ∈C) are model parameters; [[. . .]] denotes the indicator function and equals 1 if its argument is true and 0, otherwise; Z(h, θ) is the partition function ensuring that the scores sum to 1. 2.4 Relation identification model We use the following arc-factored relation identification model: Pφ(R|a, w, c) = m Y i,j=1 P(rij|hai,ci,haj,cj) (2) Each term is modeled in exactly the same way: 1. for both endpoints, embedding of the concept c is concatenated with the RNN state h; 2. they are linearly projected to a lower dimension separately through Mh(hai ◦ci) ∈Rdf and Md(haj ◦cj) ∈Rdf , where ◦denotes concatenation; 3. a log-linear model with bilinear scores Mh(hai ◦ci)T CrMd(haj ◦cj), Cr ∈Rdf×df is used to compute the probabilities. 5See supplementary materials. In the above discussion, we assumed that BiLSTM encodes a sentence once and the BiLSTM states are then used to predict concepts and relations. In semantic role labeling, the task closely related to the relation identification stage of AMR parsing, a slight modification of this approach was shown more effective (Zhou and Xu, 2015; Marcheggiani et al., 2017). In that previous work, the sentence was encoded by a BiLSTM once per each predicate (i.e. 
verb) and the encoding was in turn used to identify arguments of that predicate. The only difference across the re-encoding passes was a binary flag used as input to the BiLSTM encoder at each word position. The flag was set to 1 for the word corresponding to the predicate and to 0 for all other words. In that way, BiLSTM was encoding the sentence specifically for predicting arguments of a given predicate. Inspired by this approach, when predicting label rij for j ∈{1, . . . m}, we input binary flags p1, . . . pn to the BiLSTM encoder which are set to 1 for the word indexed by ai (pai = 1) and to 0 for other words (pj = 0, for j ̸= ai). This also means that BiLSTM encoders for predicting relations and concepts end up being distinct. We use this multi-pass approach in our experiments.6 2.5 Alignment model Recall that the alignment model is only used at training, and hence it can rely both on input (states h1, . . . , hn) and on the list of concepts c1, . . . , cm. Formally, we add (m−n) NULL concepts to the list.7 Aligning a word to any NULL, would correspond to saying that the word is not aligned to any ‘real’ concept. Note that each one-to-one alignment (i.e. permutation) between n such concepts and n words implies a valid injective alignment of n words to m ‘real’ concepts. This reduction to permutations will come handy when we turn to the Gumbel-Sinkhorn relaxation in the next section. Given this reduction, from now on, we will assume that m = n. As with sentences, we use a BiLSTM model to encode concepts c, where gi ∈Rdg, i ∈ {1, . . . , n}. We use a globally-normalized align6Using the vanilla one-pass model from equation (2) results in 1.4% drop in Smatch score. 7After re-categorization (Section 3.1), m ≥n holds for most cases. For exceptions, we append NULL to the sentence. 401 ment model: Qψ(a|c, R, w) = exp(Pn i=1 ϕ(gi, hai)) Zψ(c, w) , where Zψ(c, w) is the intractable partition function and the terms ϕ(gi, hai) score each alignment link according to a bilinear form ϕ(gi, hai) = gT i Bhai, (3) where B ∈Rdg×d is a parameter matrix. 2.6 Estimating model with Gumbel-Sinkhorn Recall that our learning objective (1) involves expectation under the alignment model. The partition function of the alignment model Zψ(c, w) is intractable, and it is tricky even to draw samples from the distribution. Luckily, the recently proposed relaxation (Mena et al., 2018) lets us circumvent this issue. First, note that exact samples from a categorical distribution can be obtained using the perturb-and-max technique (Papandreou and Yuille, 2011). For our alignment model, it would correspond to adding independent noise to the score for every possible alignment and choosing the highest scoring one: a⋆= argmax a∈P n X i=1 ϕ(gi, hai) + ϵa, (4) where P is the set of all permutations of n elements, ϵa is a noise drawn independently for each a from the fixed Gumbel distribution (G(0, 1)). Unfortunately, this is also intractable, as there are n! permutations. Instead, in perturband-max an approximate schema is used where noise is assumed factorizable. In other words, first noisy scores are computed as ˆϕ(gi, hai) = ϕ(gi, hai) + ϵi,ai, where ϵi,ai ∼G(0, 1) and an approximate sample is obtained by a⋆ = argmaxa Pn i=1 ˆϕ(gi, hai), Such sampling procedure is still intractable in our case and also non-differentiable. The main contribution of Mena et al. (2018) is approximating this argmax with a simple differentiable computation ˆa = St(Φ, Σ) which yields an approximate (i.e. relaxed) permutation. 
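As a rough, non-authoritative illustration of this construction (a plain NumPy sketch in the spirit of Mena et al. (2018); the function names, the fixed iteration count and the noise handling here are ours, not the authors' code):

import numpy as np

def log_sinkhorn(log_alpha, n_iters=20):
    # Iteratively normalize rows and columns of exp(log_alpha) in log space,
    # yielding the log of an (approximately) doubly-stochastic matrix.
    for _ in range(n_iters):
        log_alpha = log_alpha - np.logaddexp.reduce(log_alpha, axis=1, keepdims=True)
        log_alpha = log_alpha - np.logaddexp.reduce(log_alpha, axis=0, keepdims=True)
    return log_alpha

def gumbel_sinkhorn(scores, temperature=1.0, n_iters=20, rng=np.random):
    # Relaxed sample of a permutation from an n x n matrix of alignment scores:
    # perturb with Gumbel noise, divide by the temperature, then apply Sinkhorn.
    gumbel_noise = -np.log(-np.log(rng.uniform(size=scores.shape) + 1e-20) + 1e-20)
    return np.exp(log_sinkhorn((scores + gumbel_noise) / temperature, n_iters))

Lower temperatures give peakier (more permutation-like) outputs at the cost of a less smooth objective.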
We use Φ and Σ to denote the n × n matrices of alignment scores ϕ(gi, hk) and noise variables ϵik, respectively. Instead of returning index ai for every concept i, it would return a (peaky) distribution over words ˆai. The peakiness is controlled by the temperature parameter t of Gumbel-Sinkhorn which balances smoothness (‘differentiability’) vs. bias of the estimator. For further details and the derivation, we refer the reader to the original paper (Mena et al., 2018). Note that Φ is a function of the alignment model Qψ, so we will write Φψ in what follows. The variational bound (1) can now be approximated as EΣ∼G(0,1)[log Pθ(c|St(Φψ, Σ), w) + log Pφ(R|St(Φψ, Σ), w, c)] −DKL(Φψ + Σ t ||Σ t0 ) (5) Following Mena et al. (2018), the original KL term from equation (1) is approximated by the KL term between two n × n matrices of i.i.d. Gumbel distributions with different temperature and mean. The parameter t0 is the ‘prior temperature’. Using the Gumbel-Sinkhorn construction unfortunately does not guarantee that P i ˆaij = 1. To encourage this equality to hold, and equivalently to discourage overlapping alignments, we add another regularizer to the objective (5): Ω(ˆa, λ) = λ X j max( X i (ˆaij) −1, 0). (6) Our final objective is fully differentiable with respect to all parameters (i.e. θ, φ and ψ) and has low variance as sampling is performed from the fixed non-parameterized distribution, as in standard VAEs. 2.7 Relaxing concept and relation identification One remaining question is how to use the soft input ˆa = St(Φψ, Σ) in the concept and relation identification models in equation (5). In other words, we need to define how we compute Pθ(c|St(Φψ, Σ), w) and Pφ(R|St(Φψ, Σ), w, c). The standard technique would be to pass to the models expectations under the relaxed variables Pn k=1 ˆaikhk, instead of the vectors hai (Maddison et al., 2017; Jang et al., 2017). This is what we do for the relation identification model. We use this approach also to relax the one-hot encoding of the predicate position (p, see Section 2.4). However, the concept prediction model log Pθ(c|St(Φψ, Σ), w) relies on the pointing mechanism, i.e. directly exploits the words w rather than relies only on biLSTM states hk. So 402 thing The opinion of the boy boy ARG0 thing(opinion) opine-01 concept(boy) primary primary secondary AR1 Re-categorized Concepts 2 1 3 1 2 ARG1 Figure 3: An example of re-categorized AMR. AMR graph at the top, re-categorized concepts in the middle, and the sentence is at the bottom. instead we treat ˆai as a prior in a hierarchical model: logPθ(ci|ˆai, w) ≈log n X k=1 ˆaikPθ(ci|ai = k, w) (7) As we will show in our experiments, a softer version of the loss is even more effective: logPθ(ci|ˆai, w) ≈log n X k=1 (ˆaikPθ(ci|ai = k, w))α, (8) where we set the parameter α = 0.5. We believe that using this loss encourages the model to more actively explore the alignment space. Geometrically, the loss surface shaped as a ball in the 0.5norm space would push the model away from the corners, thus encouraging exploration. 3 Pre- and post-pocessing 3.1 Re-Categorization AMR parsers often rely on a pre-processing stage, where specific subgraphs of AMR are grouped together and assigned to a single node with a new compound category (e.g., Werling et al. (2015); Foland and Martin (2017); Peng et al. (2017)); this transformation is reversed at the post-processing stage. 
Our approach is very similar to the Factored Concept Label system of Wang and Xue (2017), with one important difference that we unpack our concepts before the relation identification stage, so the relations are predicted between original concepts (all nodes in each group share the same alignment distributions to the RNN states). Intuitively, the goal is to ensure that concepts rarely lexically triggered (e.g., thing in Figure 3) get grouped together with lexically triggered nodes. Such ‘primary’ concepts get encoded in the category of the concept (the set of categories is τ, see also section 2.3). In Figure 3, the re-categorized concept thing(opinion) is produced from thing and opine-01. We use concept as the dummy category type. There are 8 templates in our system which extract re-categorizations for fixed phrases (e.g. thing(opinion)), and a deterministic system for grouping lexically flexible, but structurally stable sub-graphs (e.g., named entities, have-rel-role91 and have-org-role-91 concepts). Details of the re-categorization procedure and other pre-processing are provided in appendix. 3.2 Post-processing For post-processing, we handle sensedisambiguation, wikification and ensure legitimacy of the produced AMR graph. For sense disambiguation we pick the most frequent sense for that particular concept (‘-01’, if unseen). For wikification we again look-up in the training set and default to ”-”. There is certainly room for improvement in both stages. Our probability model predicts edges conditional independently and thus cannot guarantee the connectivity of AMR graph, also there are additional constraints which are useful to impose. We enforce three constraints: (1) specific concepts can have only one neighbor (e.g., ‘number’ and ‘string’; see appendix for details); (2) each predicate concept can have at most one argument for each relation r ∈ R; (3) the graph should be connected. Constraint (1) is addressed by keeping only the highest scoring neighbor. In order to satisfy the last two constraints we use a simple greedy procedure. First, for each edge, we pick-up the highest scoring relation and edge (possibly NULL). If the constraint (2) is violated, we simply keep the highest scoring edge among the duplicates and drop the rest. If the graph is not connected (i.e. constraint (3) is violated), we greedily choose edges linking the connected components until the graph gets connected (MSCG in Flanigan et al. (2014)). Finally, we need to select a root node. Similarly to relation identification, for each candidate concept ci, we concatenate its embedding with the corresponding LSTM state (hai) and use these scores in a softmax classifier over all the concepts. 403 Model Data Smatch JAMR (Flanigan et al., 2016) R1 67.0 AMREager (Damonte et al., 2017) R1 64.0 CAMR (Wang et al., 2016) R1 66.5 SEQ2SEQ + 20M (Konstas et al., 2017) R1 62.1 Mul-BiLSTM (Foland and Martin, 2017) R1 70.7 Ours R1 73.7 Neural-Pointer (Buys and Blunsom, 2017) R2 61.9 ChSeq (van Noord and Bos, 2017) R2 64.0 ChSeq + 100K (van Noord and Bos, 2017) R2 71.0 Ours R2 74.4 ± 0.16 Table 1: Smatch scores on the test set. R2 is LDC2016E25 dataset, and R1 is LDC2015E86 dataset. Statistics on R2 are over 8 runs. 4 Experiments and Discussion 4.1 Data and setting We primarily focus on the most recent LDC2016E25 (R2) dataset, which consists of 36521, 1368 and 1371 sentences in training, development and testing sets, respectively. The earlier LDC2015E86 (R1) dataset has been used by much of the previous work. 
It contains 16833 training sentences, and same sentences for development and testing as R2.8 We used the development set to perform model selection and hyperparameter tuning. The hyperparameters, as well as information about embeddings and pre-processing, are presented in the supplementary materials. We used Adam (Kingma and Ba, 2014) to optimize the loss (5) and to train the root classifier. Our best model is trained fully jointly, and we do early stopping on the development set scores. Training takes approximately 6 hours on a single GeForce GTX 1080 Ti with Intel Xeon CPU E52620 v4. 4.2 Experiments and discussion We start by comparing our parser to previous work (see Table 1). Our model substantially outperforms all the previous models on both datasets. Specifically, it achieves 74.4% Smatch score on LDC2016E25 (R2), which is an improvement of 3.4% over character seq2seq model relying on silver data (van Noord and Bos, 2017). For LDC2015E86 (R1), we obtain 73.7% Smatch score, which is an improvement of 3.0% over 8Annotation in R2 has also been slightly revised. Models A’ C’ J’ Ch’ Ours 17 16 16 17 Dataset R1 R1 R1 R2 R2 Smatch 64 63 67 71 74.4±0.16 Unlabeled 69 69 69 74 77.1±0.10 No WSD 65 64 68 72 75.5±0.12 Reentrancy 41 41 42 52 52.3±0.43 Concepts 83 80 83 82 85.9±0.11 NER 83 75 79 79 86.0±0.46 Wiki 64 0 75 65 75.7±0.30 Negations 48 18 45 62 58.4±1.32 SRL 56 60 60 66 69.8±0.24 Table 2: F1 scores on individual phenomena. A’17 is AMREager, C’16 is CAMR, J’16 is JAMR, Ch’17 is ChSeq+100K. Ours are marked with standard deviation. Metric PreR1 PreR2 Align Align mean Smatch 72.8 73.7 73.5 74.4 Unlabeled 75.3 76.3 76.1 77.1 No WSD 73.8 74.7 74.6 75.5 Reentrancy 50.2 50.6 52.6 52.3 Concepts 85.4 85.5 85.5 85.9 NER 85.3 84.8 85.3 86.0 Wiki 66.8 75.6 67.8 75.7 Negations 56.0 57.2 56.6 58.4 SRL 68.8 68.9 70.2 69.8 Table 3: F1 scores of on subtasks. Scores on ablations are averaged over 2 runs. The left side results are from LDC2015E86 and right results are from LDC2016E25. the previous best model, multi-BiLSTM parser of Foland and Martin (2017). In order to disentangle individual phenomena, we use the AMR-evaluation tools (Damonte et al., 2017) and compare to systems which reported these scores (Table 2). We obtain the highest scores on most subtasks. The exception is negation detection. However, this is not too surprising as many negations are encoded with morphology, and character models, unlike our word-level model, are able to capture predictive morphological features (e.g., detect prefixes such as “un-” or “im-”). Now, we turn to ablation tests (see Table 3). First, we would like to see if our latent alignment framework is beneficial. In order to test this, we create a baseline version of our system (‘prealign’) which relies on the JAMR aligner (Flani404 long hours and lots of long nights op2 ARG1-of and 1 long 3 night 4 long 6 hour 2 op1 ARG1-of lot 5 quant Figure 4: When modeling concepts alone, the posterior probability of the correct (green) and wrong (red) alignment links will be the same. Ablation Concepts SRL Smatch 2 stages 85.6 68.9 73.6 2 stages, tune align 85.6 69.2 73.9 Full model 85.9 69.8 74.4 Table 4: Ablation studies: effect of joint modeling (all on R2). Scores on ablations are averaged over 2 runs. The first two models load the same concept and alignment model before the second stage. gan et al., 2014), rather than induces alignments as latent variables. 
Recall that in our model we used training data and a lemmatizer to produce candidates for the concept prediction model (see Section 2.3, the copy function). In order to have a fair comparison, if a concept is not aligned after JAMR, we try to use our copy function to align it. If an alignment is not found, we make the alignment uniform across the unaligned words. In preliminary experiments, we considered alternatives versions (e.g., dropping concepts unaligned by JAMR or dropping concepts unaligned after both JAMR and the matching heuristic), but the chosen strategy was the most effective. These scores of pre-align are superior to the results from Foland and Martin (2017) which also relies on JAMR alignments and uses BiLSTM encoders. There are many potential reasons for this difference in performance. For example, their relation identification model is different (e.g., single pass, no bi-affine modeling), they used much smaller networks than us, they use plain JAMR rather than a combination of JAMR and our copy function, they use a different recategorization system. These results confirm that we started with a strong basic model, and that our variational alignment framework provided further gains in performance. Now we would like to confirm that joint training of alignments with both concepts and relations is beneficial. In other words, we would like to see if alignments need to be induced in such a way Ablation Concepts SRL Smatch No Sinkhorn 85.7 69.3 73.8 No Sinkhorn reg 85.6 69.5 74.2 No soft loss 85.2 69.1 73.7 Full model 85.9 69.8 74.4 Table 5: Ablation studies: alignment modeling and relaxation (all on R2). Scores on ablations are averaged over 2 runs. as to benefit the relation identification task. For this ablation we break the full joint training into two stages. We start by jointly training the alignment model and the concept identification model. When these are trained, we optimizing the relation model but keep the concept identification model and alignment models fixed (‘2 stages’ in see Table 4). When compared to our joint model (‘full model’), we observe a substantial drop in Smatch score (-0.8%). In another version (‘2 stages, tune align’) we also use two stages but we fine-tune the alignment model on the second stage. This approach appears slightly more accurate but still -0.5% below the full model. In both cases, the drop is more substantial for relations (‘SRL’). In order to see why relations are potentially useful in learning alignments, consider Figure 4. The example contains duplicate concepts long. The concept prediction model factorizes over concepts and does not care which way these duplicates are aligned: correctly (green edges) or not (red edges). Formally, the true posterior under the conceptonly model in ‘2 stages’ assigns exactly the same probability to both configurations, and the alignment model Qψ will be forced to mimic it (even though it relies on an LSTM model of the graph). The spurious ambiguity will have a detrimental effect on the relation identification stage. It is interesting to see the contribution of other modeling decisions we made when modeling and relaxing alignments. First, instead of using Gumbel-Sinkhorn, which encourages mutuallyrepulsive alignments, we now use a factorized alignment model. Note that this model (‘No Sinkhorn’ in Table 5) still relies on (relaxed) discrete alignments (using Gumbel softmax) but does not constrain the alignments to be injective. 
A substantial drop in performance indicates that the prior knowledge about the nature of alignments appears beneficial. Second, we remove the additional regularizer for Gumbel-Sinkhorn approximation (equation (6)). The performance drop in 405 Smatch score (‘No Sinkhorn reg’) is only moderate. Finally, we show that using the simple hierarchical relaxation (equation (7)) rather than our softer version of the loss (equation (8)) results in a substantial drop in performance (‘No soft loss’, -0.7% Smatch). We hypothesize that the softer relaxation favors exploration of alignments and helps to discover better configurations. 5 Additional Related Work Alignment performance has been previously identified as a potential bottleneck affecting AMR parsing (Damonte et al., 2017; Foland and Martin, 2017). Some recent work has focused on building aligners specifically for training their parsers (Werling et al., 2015; Wang and Xue, 2017). However, those aligners are trained independently of concept and relation identification and only used at pre-processing. Treating alignment as discrete variables has been successful in some sequence transduction tasks with neural models (Yu et al., 2017, 2016). Our work is similar in that we also train discrete alignments jointly but the tasks, the inference framework and the decoders are very different. The discrete alignment modeling framework has been developed in the context of traditional (i.e. non-neural) statistical machine translation (Brown et al., 1993). Such translation models have also been successfully applied to semantic parsing tasks (e.g., (Andreas et al., 2013)), where they rivaled specialized semantic parsers from that period. However, they are considerably less accurate than current state-of-the-art parsers applied to the same datasets (e.g., (Dong and Lapata, 2016)). For AMR parsing, another way to avoid using pre-trained aligners is to use seq2seq models (Konstas et al., 2017; van Noord and Bos, 2017). In particular, van Noord and Bos (2017) used character level seq2seq model and achieved the previous state-of-the-art result. However, their model is very data demanding as they needed to train it on additional 100K sentences parsed by other parsers. This may be due to two reasons. First, seq2seq models are often not as strong on smaller datasets. Second, recurrent decoders may struggle with predicting the linearized AMRs, as many statistical dependencies are highly non-local. 6 Conclusions We introduced a neural AMR parser trained by jointly modeling alignments, concepts and relations. We make such joint modeling computationally feasible by using the variational autoencoding framework and continuous relaxations. The parser achieves state-of-the-art results and ablation tests show that joint modeling is indeed beneficial. We believe that the proposed approach may be extended to other parsing tasks where alignments are latent (e.g., parsing to logical form (Liang, 2016)). Another promising direction is integrating character seq2seq to substitute the copy function. This should also improve the handling of negation and rare words. Though our parsing model does not use any linearization of the graph, we relied on LSTMs and somewhat arbitrary linearization (depth-first traversal) to encode the AMR graph in our alignment model. 
A better alternative would be to use graph convolutional networks (Marcheggiani and Titov, 2017; Kipf and Welling, 2017): neighborhoods in the graph are likely to be more informative for predicting alignments than the neighborhoods in the graph traversal. Acknowledgments We thank Marco Damonte, Shay Cohen, Diego Marcheggiani and Wilker Aziz for helpful discussions as well as anonymous reviewers for their suggestions. The project was supported by the European Research Council (ERC StG BroadSem 678254) and the Dutch National Science Foundation (NWO VIDI 639.022.518). References Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 47–52. Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1699–1710. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations. 406 Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for Sembanking. J¨org Bornschein and Yoshua Bengio. 2015. Reweighted wake-sleep. International Conference on Learning Representations. Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Comput. Linguist., 19(2):263– 311. Jan Buys and Phil Blunsom. 2017. Oxford at semeval2017 task 9: Neural amr parsing with pointeraugmented attention. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 914–919. Association for Computational Linguistics. Marco Damonte, Shay B Cohen, and Giorgio Satta. 2017. An Incremental Parser for Abstract Meaning Representation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 536–546. Shibhansh Dohare and Harish Karnick. 2017. Text Summarization using Abstract Meaning Representation. arXiv preprint arXiv:1706.01678. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 33–43. Timothy Dozat and Christopher D. Manning. 2017. Deep Biaffine Attention for Neural Dependency Parsing. International Conference on Learning Representations. Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016. CMU at SemEval-2016 Task 8: Graph-based AMR Parsing with Infinite Ramp Loss. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval2016), pages 1202–1206. Association for Computational Linguistics. Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A Discriminative Graph-Based Parser for the Abstract Meaning Representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426– 1436, Baltimore, Maryland. Association for Computational Linguistics. William Foland and James H. Martin. 2017. 
Abstract Meaning Representation Parsing using LSTM Recurrent Neural Networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 463–472, Vancouver, Canada. Association for Computational Linguistics. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. International Conference on Learning Representations. Bevan K. Jones, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-Based Machine Translation with Hyperedge Replacement Grammars. In COLING. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. International Conference on Learning Representations. Diederik P Kingma and Max Welling. 2014. Autoencoding variational bayes. International Conference on Learning Representations. Thomas N Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. International Conference on Learning Representations. Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: Sequence-to-Sequence Models for Parsing and Generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 146–157, Vancouver, Canada. Association for Computational Linguistics. Percy Liang. 2016. Learning executable semantic parsers for natural language understanding. Communications of the ACM, 59(9):68–76. Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman M. Sadeh, and Noah A. Smith. 2015. Toward Abstractive Summarization Using Semantic Representations. In HLT-NAACL. Edward Loper and Steven Bird. 2002. NLTK: The Natural Language Toolkit. In Proceedings of the ACL02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics - Volume 1, ETMTNLP ’02, pages 63–70, Stroudsburg, PA, USA. Association for Computational Linguistics. Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 11–19, Beijing, China. Association for Computational Linguistics. Chris J Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The concrete distribution: A continuous relaxation of discrete random variables. International Conference on Learning Representations. 407 Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Diego Marcheggiani, Anton Frolov, and Ivan Titov. 2017. A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based Semantic Role Labeling. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 411–420, Vancouver, Canada. Association for Computational Linguistics. Diego Marcheggiani and Ivan Titov. 2017. Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1507–1516, Copenhagen, Denmark. Association for Computational Linguistics. Gonzalo Mena, David Belanger, Scott Linderman, and Jasper Snoek. 2018. 
Learning Latent Permutations with Gumbel-Sinkhorn Networks. International Conference on Learning Representations. Accepted as poster. Arindam Mitra and Chitta Baral. 2016. Addressing a question answering challenge by combining statistical methods with inductive rule learning and reasoning. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016. AAAI press. Andriy Mnih and Karol Gregor. 2014. Neural variational inference and learning in belief networks. In Proceedings of the International Conference on Machine Learning. Rik van Noord and Johan Bos. 2017. Neural Semantic Parsing by Character-based Translation: Experiments with Abstract Meaning Representations. Computational Linguistics in the Netherlands Journal, 7:93–108. George Papandreou and Alan L Yuille. 2011. Perturband-map random fields: Using discrete optimization to learn and sample from energy models. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 193–200. IEEE. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. Xiaochang Peng, Chuan Wang, Daniel Gildea, and Nianwen Xue. 2017. Addressing the Data Sparsity Issue in Neural AMR Parsing. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 366–375. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Nima Pourdamghani, Yang Gao, Ulf Hermjakob, and Kevin Knight. 2014. Aligning english strings with abstract meaning representation graphs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 425–429. M. Schuster and K.K. Paliwal. 1997. Bidirectional Recurrent Neural Networks. Trans. Sig. Proc., 45(11):2673–2681. Chuan Wang, Sameer Pradhan, Xiaoman Pan, Heng Ji, and Nianwen Xue. 2016. CAMR at SemEval2016 Task 8: An Extended Transition-based AMR Parser. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1173–1178, San Diego, California. Association for Computational Linguistics. Chuan Wang and Nianwen Xue. 2017. Getting the Most out of AMR Parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1257–1268. Keenon Werling, Gabor Angeli, and Christopher D. Manning. 2015. Robust Subgraph Generation Improves Abstract Meaning Representation Parsing. In ACL. Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Tomas Kocisky. 2017. The Neural Noisy Channel. In International Conference on Learning Representations. Lei Yu, Jan Buys, and Phil Blunsom. 2016. Online Segment to Segment Neural Transduction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1307– 1316. Association for Computational Linguistics. Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1127–1137.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 408–418 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 408 Accurate SHRG-Based Semantic Parsing Yufei Chen, Weiwei Sun and Xiaojun Wan Institute of Computer Science and Technology, Peking University The MOE Key Laboratory of Computational Linguistics, Peking University {yufei.chen,ws,wanxiaojun}@pku.edu.cn Abstract We demonstrate that an SHRG-based parser can produce semantic graphs much more accurately than previously shown, by relating synchronous production rules to the syntacto-semantic composition process. Our parser achieves an accuracy of 90.35 for EDS (89.51 for DMRS) in terms of ELEMENTARY DEPENDENCY MATCH, which is a 4.87 (5.45) point improvement over the best existing data-driven model, indicating, in our view, the importance of linguistically-informed derivation for data-driven semantic parsing. This accuracy is equivalent to that of English Resource Grammar guided models, suggesting that (recurrent) neural network models are able to effectively learn deep linguistic knowledge from annotations. 1 Introduction Graph-structured semantic representations, e.g. Semantic Dependency Graphs (SDG; Clark et al., 2002; Ivanova et al., 2012), Elementary Dependency Structure (EDS; Oepen and Lønning, 2006), Abstract Meaning Representation (AMR; Banarescu et al., 2013), Dependency-based Minimal Recursion Semantics (DMRS; Copestake, 2009), and Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013), provide a lightweight yet effective way to encode rich semantic information of natural language sentences (Kuhlmann and Oepen, 2016). Parsing to semantic graphs has been extensively studied recently. At the risk of oversimplifying, work in this area can be divided into three types, according to how much structural information of a target graph is explicitly modeled. Parsers of the first type throw an input sentence into a sequence-to-sequence model and leverage the power of deep learning technologies to obtain auxiliary symbols to transform the output sequence into a graph (Peng et al., 2017b; Konstas et al., 2017). The strategy of the second type is to gradually generate a graph in a greedy search fashion (Zhang et al., 2016; Buys and Blunsom, 2017). Usually, a transition system is defined to handle graph construction. The last solution explicitly associates each basic part with a target graph score, and casts parsing as the search for the graphs with highest sum of partial scores (Flanigan et al., 2014; Cao et al., 2017). Although many parsers achieve encouraging results, they are very hard for linguists to interpret and understand, partially because they do not explicitly model the syntacto-semantic composition process which is a significant characteristic of natural languages. In theory, Synchronous Hyperedge Replacement Grammar (SHRG; Drewes et al., 1997) provides a mathematically sound framework to construct semantic graphs. In practice, however, initial results on the utility of SHRG for semantic parsing were somewhat disappointing (Peng et al., 2015; Peng and Gildea, 2016). In this paper, we show that the performance that can be achieved by an SHRG-based parser is far higher than what has previously been demonstrated. 
We focus here on relating SHRG rules to the syntactosemantic composition process because we feel that information about syntax-semantics interface has been underexploited in the data-driven parsing architecture. We demonstrate the feasibility of inducing a high-quality, linguistically-informed SHRG from compositional semantic annotations licensed by English Resource Grammar (ERG; Flickinger, 2000), dubbed English Resource Semantics1 (ERS). Coupled with RNN-based pars1http://moin.delph-in.net/ErgSemantics 409 Model Grammar SDG EDS DMRS Data-driven NO 89.4 85.48 84.16 ERG-based Unification 92.80 89.58 89.64 SHRG-based Rewriting - 90.39 89.51 Table 1: Parsing accuracy of the best existing grammar-free and -based models as well as our SHRG-based model. Results are copied from (Oepen et al., 2015; Peng et al., 2017a; Buys and Blunsom, 2017). ing techniques, we build a robust SHRG parser that is able to produce semantic analysis for all sentences. Our parser achieves an accuracy of 90.35 for EDS and 89.51 for DMRS in terms of ELEMENTARY DEPENDENCY MATCH (EDM) which outperforms the best existing grammar-free model (Buys and Blunsom, 2017) by a significant margin (see Table 1). This marked result affirms the value of modeling the syntacto-semantic composition process for semantic parsing. On sentences that can be parsed by ERG-guided parsers, e.g. PET2 or ACE3, significant accuracy gaps between ERG-guided parsers and data-driven parsers are repeatedly reported (see Table 1). The main challenge for ERG-guided parsing is limited coverage. Even for treebanking on WSJ sentences from PTB, such a parser lacks analyses for c.a. 11% of sentences (Oepen et al., 2015). Our parser yields equivalent accuracy to ERG-guided parsers and equivalent coverage, full-coverage in fact, to data-driven parsers. We see this investigation as striking a balance between data-driven and grammar-driven parsing. It is not our goal to argue against the use of unification grammar in high-performance deep linguistic processing. Nevertheless, we do take it as a reflection of two points: (1) (recurrent) neural network models are able to effectively learn deep linguistic knowledge from annotations; (2) practical parsing may benefit from transforming a model-theoretic grammar into a generative-enumerative grammar. The architecture of our parser has potential uses beyond establishing a strong string-to-graph parser. Our grammar extraction algorithm has some freedom to induce different SHRGs following different linguistic hypothesis, and allows some issues in theoretical linguistics to be empirically investigated. In this paper, we examine the 2http://pet.opendfki.de/ 3http://sweaglesw.org/linguistics/ace/ HD-CMP arg1 arg1 SP-HD arg1 HD-CMP N bv D arg2 arg1 V arg1 N bv D arg1 arg1 _go_v_1 _boy_n_1 bv _some_q arg2 arg1 _want_v_1 S HD-CMP Figure 1: A partial rewriting process of HRG on the semantic graph associated with “Some boys want to go.” Lowercase symbols indicate terminal edges, while bold, uppercase symbols indicate nonterminal edges. Red edges are the hyperedges that will be replaced in the next step, while the blue edges in the next step constitute their corresponding RHS graphs. lexicalist/constructivist hypothesis, a divide across a variety of theoretical frameworks, in an empirical setup. The lexicalist tradition traces its origins to Chomsky (1970) and is widely accepted by various computational grammar formalisms, including CCG, LFG, HPSG and LTAG. 
A lexicalist approach argues that the lexical properties of words determine their syntactic and semantic behaviors. The constructivist perspective, e.g. Borer’s ExoSkeletal approach (2005b; 2005a; 2013), emphasizes the role of syntax in constructing meanings. In this paper, we focus on lexicalist and constructivist hypotheses for syntacto-semantic composition. We present our computation-oriented analysis in §6. Under the architecture of our neural parser, a construction grammar works much better than a lexicalized grammar. Our parser is available at https://github. com/draplater/hrg-parser/. 2 Hyperedge Replacement Grammar Hyperedge replacement grammar (HRG) is a context-free rewriting formalism for graph generation (Drewes et al., 1997). An edge-labeled, directed hypergraph is a tuple H = ⟨V, E, l, X⟩, where V is a finite set of nodes, and E ⊆V + is a finite set of hyperedges. A hyperedge is an extension of a normal edge which can connect to more than two nodes or only one node. l : E →L 410 Algorithm 1 Hyperedge Replacement Grammar Extraction Algorithm Require: Input syntactic tree T, hypergraph g 1: RULES ←{} 2: for tree node n in postorder traversal of T do Ensure: Rewriting rule of node n is A →B + C, spans of node A, B, C are SPAN-A, SPAN-B, SPAN-C 3: SPANS ←{SPAN-A, SPAN-B, SPAN-C} 4: C-EDGES ←{e|e ∈EDGES(g) ∧SPAN(e) ∈SPANS} 5: ALL-NODES ←{s|s ∈NODES(g) ∧∃e ∈C-EDGES s.t. s ∈NODES(e)} 6: S-EDGES ←{e|e ∈EDGES(g) ∧e is structual edge ∧∀s ∈NODES(e) =⇒s ∈C-EDGES} 7: ALL-EDGES = C-EDGES ∪S-EDGES 8: INTERNAL-NODES ←{} 9: EXTERNAL-NODES ←{} 10: for node s in ALL-NODES do 11: if ∀e ∈EDGES(g), s ∈NODES(e) =⇒e ∈ALL-EDGES then 12: INTERNAL-NODES ←INTERNAL-NODES ∪{s} 13: else 14: EXTERNAL-NODES ←EXTERNAL-NODES ∪{s} 15: end if 16: end for 17: RULES ←RULES ∪{(A, ALL-EDGES, INTERNAL-NODES, EXTERNAL-NODES)} 18: end for assigns a label from a finite set L to each edge. X ∈V ∗defines an ordered list of nodes, i.e., external nodes, which specify the connecting parts when replacing a hyperedge. An HRG G = ⟨N, T, P, S⟩is a graph rewriting system, where N and T are two disjoint finite sets of nonterminal and terminal symbols respectively. S ∈N is the start symbol. P is a finite set of productions of the form A →R, where the left hand side (LHS) A ∈N, and the right hand side (RHS) R is a hypergraph with edge labels over N ∪T. The rewriting process replaces a nonterminal hyperedge with the graph fragment specified by a production’s RHS, attaching each external node to the matched node of the corresponding LHS. An example is shown in Figure 1. Following Chiang et al. (2013), we make the nodes only describe connections between edges and store no other information. A synchronous grammar defines mappings between different grammars. Here we focus on relating a string grammar, CFG in our case, to a graph grammar, i.e., HRG. SHRG can be represented as tuple G = ⟨N, T, T ′, P, S⟩. N is a finite set of nonterminal symbols in both CFG and HRG. T ′ and T are finite sets of terminal symbols in CFG and HRG, respectively. S ∈N is the start symbol. P is a finite set of productions of the form A →⟨R, R′, ∼⟩, where A ∈N, R is a hypergraph fragment with edge labels over N ∪T, and R′ is a symbol sequence over N ∪T ′. ∼is a mapping between the nonterminals in R and R′. When a coherent CFG derivation is ready, we can interpret it using the corresponding HRG and get a semantic graph. 
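To make the rewriting mechanism concrete, a minimal sketch of an HRG production and of hyperedge replacement is given below. This is a simplified illustration, not the data structures used in our parser; all names are ours.

from dataclasses import dataclass, field
from itertools import count

_fresh = count()  # supply of fresh node identities

@dataclass
class Hypergraph:
    edges: list = field(default_factory=list)  # each edge: (label, tuple_of_nodes, is_terminal)

@dataclass
class Production:
    lhs: str            # nonterminal label, e.g. "HD-CMP"
    rhs: Hypergraph     # graph fragment with terminal and nonterminal edges
    external: tuple     # ordered external nodes of the RHS

def replace(host, edge_index, prod):
    # Replace the nonterminal hyperedge host.edges[edge_index] with prod.rhs.
    label, attach_nodes, is_terminal = host.edges[edge_index]
    assert not is_terminal and label == prod.lhs and len(attach_nodes) == len(prod.external)
    # External RHS nodes are identified with the attachment nodes of the replaced edge;
    # every other RHS node receives a fresh identity in the host graph.
    mapping = dict(zip(prod.external, attach_nodes))
    new_edges = [e for i, e in enumerate(host.edges) if i != edge_index]
    for lab, nodes, term in prod.rhs.edges:
        nodes = tuple(mapping.setdefault(v, next(_fresh)) for v in nodes)
        new_edges.append((lab, nodes, term))
    return Hypergraph(new_edges)

Starting from a graph that contains a single start-symbol hyperedge and applying, at each step of a CFG derivation, the synchronized HRG production, one obtains the corresponding semantic graph.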
3 Grammar Extraction 3.1 Graph Representations for ERS ERS are richly detailed semantic representations produced by the ERG, a hand-crafted, linguistically-motivated HPSG grammar for English. Beyond basic predicate–argument structures, ERS also includes other information about various complex phenomena such as the distinction between scopal and non-scopal arguments, conditionals, comparatives, and many others. ERS are in the formalism of Minimal Recursion Semantics (MRS; Copestake et al., 2005), but can be expressed in different ways. Semantic graphs, including EDS and DMRS, can be reduced from the standard feature structure encoded representations, with or without a loss of information. In this paper, we conduct experiments on ERS data, but our grammar extraction algorithm and the parser are not limited to ERS. One distinguished characteristic of ERS is that the construction of ERS strictly follows the prin411 N(1,2) bv D(0,1) arg1 V(4,5) arg1 arg2 V(2,3) _boy_n_1(1,2) bv _some_q(0,1) arg1 _go_v_1(4,5) arg1 arg2 _want_v_1(2,3) SP-HD(0,2) arg1 V(4,5) arg1 arg2 V(2,3) SP-HD(0,2) arg1 arg1 HD-CMP(2,6) S↓SB-HD(0,6) ① ② ③ ④ ⑤ SP-HD(0,2) arg1 HD↓V(4,6) arg1 arg2 V(2,3) SP-HD(0,2) arg1 HD-CMP(3,6) arg1 arg2 V(2,3) S SB-HD SP-HD D some N boys HD-CMP V want HD-CMP CM to HD V V go PUNCT . 0 1 2 3 4 5 ① ② ③ ④ ⑤ x y z { | Shared LHS SP-HD HD↓V HD-CMP HD-CMP S↓SP-HD RHS (syntax) D + N V + PUNCT CM + HD↓V V + HD-CMP SP-HD + HD-CMP RHS (semantics) N bv D V HD↓V HD-CMP arg2 V arg1 HD-CMP SP-HD arg1 Figure 2: The grammar extraction process of the running example. Conceptual edges which are directly aligned with the syntactic rules are painted in green. The span-based alignment is shown in the parentheses. Structural edges that connect conceptual edges are painted in brown. Green edges and brown edges together form the subgraph, which acts as RHS in the HRG rule. External nodes are represented as solid dots. ciple of compositionality (Bender et al., 2015). A precise syntax-semantics interface is introduced to guarantee compositionality and therefore all meaning units can be traced back to linguistic signals, including both lexical and constructional ones. Take Figure 2 for example. Every concept, e.g. the existence quantifier some q, is associated with a surface string. We favor such correspondence not because it eases extraction of SHRGs, but because we emphasize sentence meanings that are from forms. The connection between syntax (sentence form) and semantics (word and sentence meaning) is fundamental to the study of language. 3.2 The Algorithm We introduce a novel SHRG extraction algorithm, which requires and only requires alignments between conceptual edges and surface strings. A tree is also required, but this tree does not have to be a gold-standard syntactic tree. All trees that are compatible with an alignment can be used. The syntactic part of DeepBank is a phrase structure which describes HPSG derivation. The vast majority of syntactic rules in DeepBank are binary, and the rest are unary. In §5, we report evaluation results based on DeepBank trees. A conceptual graph is composed by two kinds of edges: 1) conceptual edges that carry semantic concept information and are connected with only one node, and 2) structural edges that build relationships among concepts by connecting nodes. The grammar extraction process repeatedly replaces a subgraph with a nonterminal hyperedge, defining the nonterminal symbol as LHS and the subgraph as RHS. 
The key problem is to identify an appropriate subgraph in each step. To this end, we take advantage of DeepBank’s accurate and fine-grained alignments between the surface string in syntactic tree and concepts in semantic graphs. To extract the HRG rule synchronized with the syntactic rewriting rule A →B + C, we assume that conceptual edges sharing common spans with A, B or C are in the same subgraph. This subgraph acts as the RHS of the HRG rule. We make the extraction process go in the direction of postorder traversal of the syntactic tree, to ensure that all sub-spans of A, B or C are already replaced with hyperedges. We then add the structural edges that connect the above conceptual edges to RHS. After the subgraph is identified, it is easy to distinguish between internal nodes and external nodes. 412 If all edges connected to a node are in the subgraph, this node is an internal node. Otherwise, it is external node. Finally, the subgraph is replaced with a nonterminal edge. Algorithm 1 presents a precise demonstration and Figure 2 illustrates an example. 4 A Neural SHRG Parser Under the SHRG formalism, semantic parsing can be divided into two steps: syntactic parsing and semantic interpretation. Syntactic parsing utilizes the CFG part to get a derivation that is shared by the HRG part. At one derivation step, there may be more than one HRG rule applicable. In this case, we need a semantic disambiguation model to choose a good one. 4.1 Syntactic Parsing Following the LSTM-Minus approach proposed by Cross and Huang (2016), we build a constituent parser with a CKY decoder. We denote the output vectors of forward and backward LSTM as fi and bi. The feature si,j of a span (i, j) can be calculated from the differences of LSTM encodings: si,j = (fj −fi) ⊕(bi −bj) The operator ⊕indicates the concatenation of two vectors. Constituency parsing can be regarded as predicting scores for spans and labels, and getting the best syntactic tree with dynamic programming. Following Stern et al. (2017)’s approach, We calculate the span scores SCOREspan(i, j) and labels scores SCORElabel(i, j, l) from si,j with multilayer perceptrons (MLPs): SCOREspan(i, j) = MLPspan(si,j) SCORElabel(i, j, l) = MLPlabel(si,j)[l] x[i] indicates the ith element of a vector x. We condense the unary chains into one label to ensure that only one rule is corresponds with a specific span. Because the construction rules from DeepBank are either unary or binary, we do not deal with binarization. Because the SHRG synchronizes at rule level, we need to restrict the parser to ensure that the output agrees with the known rules. The restriction can be directly added into the CKY decoder. To simplify the semantic interpretation process, we add extra label information to enrich the nonterminals in CFG rules. In particular, we consider the count of external nodes of a corresponding HRG rule. For example, the LHS of rule { in Figure 2 will be labeled as “HD-CMP#2”, since the RHS of its HRG counterpart has two external nodes. 4.2 Semantic Interpretation When a phrase structure tree, i.e., a derivation tree, T is available, semantic interpretation can be regarded as translating T to the derivation of graph construction by assigning a corresponding HRG rule to each syntactic counterpart. Our approach to finding the optimal HRG rule combination ˆR = {r1, r2, ...} from the search space R(T): ˆR = argmaxR∈R(T)SCORE(R|T) (1) To solve this optimization problem, we implement a greedy search decoder and a bottom-up beam search decoder. 
The final semantic graph G is read off from ˆR. 4.2.1 Greedy Search Model In this model, we assume that each HRG rule is selected independently of the others. The score of G is defined as the sum of all rule scores: SCORE(R = {r1, r2, ...}|T) = X r∈R SCORE(r|T) The maximization of the graph score can be decomposed into the maximization of each rule score. SCORE(r|T) can be calculated in many ways. Count-based approach is the simplest one, where the rule score is estimated by its frequency in the training data. We also evaluate a sophisticated scoring method, i.e., training a classifier based on rule embedding: SCORE(r|T) = MLP(si,j ⊕r) Inspired by the bag-of-words model, we represent the rule as bag of edge labels. The i-th position in r indicates the number of times the i-th label appears in the rule. 4.2.2 Bottom-Up Beam Search Model We can also leverage structured prediction to approximate SCORE(R|T) and employ principled decoding algorithms to solve the optimization problem (1). We propose a factorization model to assign scores to the graph and subgraphs in the intermediate state. The score of a certain graph can 413 be seen as the sum of each factor score. SCORE(R|T) = X i∈PART(R,T) SCOREPART(i) We use predicates and arguments as factors for scoring. There are two kinds of factors: 1) A conceptual edge aligned with span (i, j) taking predicate name p. We use the span embedding si,j as features, and scoring with non-linear transformation: SCOREPARTpred(i, j, p) = MLPpred(si,j)[p] 2) A structural edge with label L connects with predicates pa and pb, which are aligned with spans (i1, j1) and (i2, j2) respectively. We use the span embedding si1,j1, si2,j2 and random initialized predicate embedding pa, pb as features, and scoring with non-linear transformation: SCOREPARTarg(i1, j1, i2, j2, pa, pb, L) = MLParg(si1,j1 ⊕si2,j2 ⊕pa ⊕pb)[L] We assign a beam to each node in the syntactic tree. To ensure that we always get a subgraph which does not contain any nonterminal edges during the search process, we perform the beam search in the bottom-up direction. We only reserve top k subgraphs in each beam. Figure 3 illustrates the process. 4.3 Training The objective of training is to make the score of the correct graph higher than incorrect graphs. We use the score difference between the correct graph Rg and the highest scoring incorrect graph as the loss: loss = max ˆR̸=RgSCORE( ˆR|T)−SCORE(Rg|T) Following (Kiperwasser and Goldberg, 2016)’s experience of loss augmented inference, in order to update graphs which have high model scores but are very wrong, we augment each factor belonging to the gold graph by adding a penalty term c to its score. Finally the loss term is: loss = SCORE(Rg|T) − X i∈PART(Rg,T) c− max(SCORE( ˆR|T) − X i∈PART( ˆR,T)∩PART(Rg,T) c) Some boys want to go . _boy_n_1 _boy_n_1 bv _some_q Ø Ø _go_v_1 arg2 _want_v_1 arg1 _go_v_1 _boy_n_1 bv _some_q arg2 arg1 _want_v_1 _some_q _want_v_1 _go_v_1 N bv D V HD↓V HD-CMP arg2 V arg1 HD-CMP SP-HD arg1 _go_v_1 _go_v_1 Figure 3: The semantic interpretation process. The interpretation performs bottom-up beam search to get a bunch of high-scored subgraphs for each node in the derivation tree. 5 Experiments 5.1 Set-up DeepBank is an annotation of the Penn TreeBank Wall Street Journal which is annotated under the formalism of HPSG. We use DeepBank version 1.1, corresponding to ERG 1214, and use the standard data split. Therefore the numeric performance can be directly compared to results reported in Buys and Blunsom (2017). 
We use the pyDelphin library to extract DMRS and EDS graphs and use the tool provided by jigsaw4 to separate punctuation marks from the words they attach to. We use DyNet5 to implement our neural models, and automatic batch technique (Neubig et al., 2017) in DyNet to perform mini-batch gradient descent training. The detailed network hyper-parameters can be seen in Table 2. The same pre-trained word embedding as (Kiperwasser and Goldberg, 2016) is employed. 5.2 Results of Grammar Extraction DeepBank provides fine-grained syntactic trees with rich information. For example, the label SP-HD HC C denotes that this is a “head+specifier” construction, where the semantic head is also the syntactic head. But there 4www.coli.uni-saarland.de/˜yzhang/ files/jigsaw.jar 5https://github.com/clab/dynet 414 Hyperparamter Value Batch size 32 Pre-trained word embedding dimension 100 Random-initialized word embedding dimension 150 LSTM Layer count 2 LSTM dimension (each direction) 250 MLP hidden layer count 1 MLP hidden layer dimension 250 penalty term c 1 Table 2: Hyperparamters used in the experiments. #EP #Rule #Instance Fine Coarse Unlabeled EDS 1 49689 14234 1476 676817 2 9616 3424 488 64708 3 2739 1486 280 11195 4 1059 732 248 2071 5+ 508 418 251 655 DMRS 1 50668 15745 2688 657999 2 11428 4418 896 79888 3 3576 1929 465 14237 4 1237 873 299 2561 5+ 669 557 297 901 Table 3: Statistics of SHRG rules with different label type by the count of external points in EDS and DMRS representations. is also the potential for data sparseness. In our experiments, we extract SHRG with three kinds of labels: fine-grained labels, coarse-grained labels and single Xs (meaning unlabeled parsing). The fine-grained labels are the original labels, namely fine-grained construction types. We use the part before the first underscore of each label, e.g. SP-HD, as a coarse-grained label. The coarse-grained labels are more like the highly generalized rule schemata proposed by Pollard and Sag (1994). Some statistics are shown in Table 3. Instead of using gold-standard trees to extract a synchronous grammar, we also tried randomlygenerated alignment-compatible trees. The result is shown in Table 4. Gold standard trees exhibit a low entropy, indicating a high regularity. 5.3 Results of Syntactic Parsing In addition to the standard evaluation method for phrase-structure parsing, we find a more suitable measurement, i.e. condensed score, for our task. Because we condense unary rule chains into one label and extract synchronous grammar under this condensed syntactic tree, it is better to calculate the correctness of the condensed label rather than Tree Type 1 2 3 4 5+ Gold 1476 488 280 248 251 Fuzzy 1 12710 7591 7963 6578 8998 Fuzzy 2 13606 7355 7228 6090 9112 Fuzzy 3 12278 8228 8462 7039 9946 Table 4: Comparison of grammars extracted from unlabeled gold trees and randomly-generated alignment-compatible trees (”fuzzy” trees). Label Standard Condensed P R F POS BCKT POS Fine 90.81 91.19 91.00 94.40 87.09 92.98 Coarse 90.78 91.24 91.01 98.30 87.93 95.98 Table 5: Accuracy of syntactic parsing under different labels on development data. We add the count of external nodes of corresponding HRG rule. “POS” concerns the prediction of preterminals, while “BCKT” denotes bracketing. a single label. The additional label “#N” that indicates the number of external points is also considered in our condensed score evaluation method. The result is shown in Table 5. 
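For concreteness, the two label transformations used above, coarse-graining and the "#N" external-node suffix, can be sketched as follows (illustrative only, not our exact preprocessing code):

def coarse_label(fine_label):
    # e.g. "SP-HD_HC_C" -> "SP-HD": keep the part before the first underscore.
    return fine_label.split("_", 1)[0]

def enriched_nonterminal(label, hrg_rule_external_nodes):
    # Append the external-node count of the synchronized HRG rule,
    # e.g. "HD-CMP" with 2 external nodes -> "HD-CMP#2".
    return f"{label}#{len(hrg_rule_external_nodes)}"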
5.4 Results of Semantic Interpretation Dridan and Oepen (2011) proposed the EDM metric to evaluate the performance the ERS-based graphs. EDM uses the alignment between the nodes in a graph and the spans in a string to detect the common parts between two graphs. It converts the predicate and predicate–argument relationship to comparable triples and calculates the correctness in these triples. A predicate of label L and span S is denoted as triple (S, NAME, L) and a relationship R between the predicate labelled P and argument labelled A is denoted as triple (P, R, A). We calculate the F1 value of the total triples as EDM score. Similarity, we compute the F1 score of only predicate triples and only the relation triples as EDMP and EDMA. We reuse the word embeddings and bidirectional LSTM in the trained syntactic parsing model to extract span embedding si,j. The results of the count-based model, rule embedding model and structured model with beam decoder are summarized in Table 6. We report the standard EDM metrics. The count-based model can achieve considerably good results, showing the correctness of our grammar extraction method. We also try different labels for the syntactic trees. The results 415 Model EDMP EDMA EDM Count Based 90.12 81.96 86.03 Rule Embedding 93.41 84.84 89.11 Beam Search 93.48 87.88 90.67 Table 6: The EDM score on EDS development data with different model: count based greedy search, rule embedding greedy search and beam search. We use syntactic trees with coarse-grained labels. Data Label EDMP EDMA EDM EDS Fine 92.70 87.77 90.23 Coarse 93.48 87.88 90.67 DMRS Fine 92.52 86.47 89.46 Coarse 93.60 86.62 90.07 Table 7: Accuracy on the development data under different labels of syntactic tree and beam search. are shown in Table 7. Models based on coarsegrained labels achieve optimal performance. The results on test set of EDS data are shown in Table 8. We achieve state-of-the-art performance with a remarkable improvement over Buys and Blunsom (2017)’s neural parser. 6 On Syntax-Semantics Interface In this paper, we empirically study the lexicalist/constructivist hypothesis, a divide across a variety of theoretical frameworks, taking semantic parsing as a case study. Although the original grammar that guides the annotation of ERS data, namely ERG, is highly lexicalized in that the majority of information is encoded in lexical entries (or lexical rules) as opposed to being represented in constructions (i.e., rules operating on phrases), our grammar extraction algorithm has some freedom to induce different SHRGs that choose between the lexicalist and constructivist approaches. We modify algorithm 1 to follow the key insights of the lexicalist approach. This is done by considering all outgoing edges when finding the subgraph of the lexical rules. The differences between two kinds of grammars is shown in Table 9. Different grammars allow the lexicalist/constructivist issue in theoretical linguistics to be empirically examined. The comparison of the counts of rules in each grammar is summarized in Table 11, from which we can see that the sizes of the grammars are comparable. However, the parsing results are quite different, as shown Model EDMP EDMA EDM EDS Buys and Blunsom 88.14 82.20 85.48 ACE 91.82 86.92 89.58 Ours 93.15 87.59 90.35 DMRS Buys and Blunsom 87.54 80.10 84.16 ACE 92.08 86.77 89.64 Ours 93.11 86.01 89.51 Table 8: Accuracy on the test set. We use syntactic trees of coarse-grained labels and beam search. in Table 10. 
A construction grammar works much better than a lexicalized grammar under the architecture of our neural parser. We take this comparison as informative since lexicalist approaches are more widely accepted by various computational grammar formalisms, including CCG, LFG, HPSG and LTAG. We think that the success of applying SHRG to resolve semantic parsing highly relies on the compositionality nature of ERS’ sentence-level semantic annotation. This is the property that makes sure the extracted rules are consistent and regular. Previous investigation by Peng et al. (2015) on SHRG-based semantic parsing utilizes AMRBank which lacks this property to some extent (see Bender et al.’s argument). We think this may be one reason for the disappointing parsing performance. Think about the AMR graph associated “John wants Bob to believe that he saw him.” The AMR’s annotation for co-reference is a kind of non-compositional, speaker meaning, and results in grammar sparseness. 7 On Deep Linguistic Knowledge Semantic annotations have a tremendous impact on semantic parsing. In parallel with developing new semantic annotations, e.g. AMRBank, there is a resurgence of interest in exploring existing annotations grounded under deep grammar formalisms, such as semantic analysis provided by ERS (Flickinger, 2000). In stark contrast, it seems that only the annotation results gain interests, but not the core annotation engine—knowledgeextensive grammar. The tendency to continually ignore the positive impact of precision grammar on semantic parsing is somewhat strange. For sentences that can be parsed by an ERG-guided parser, there is a significant accuracy gap which is repeatedly reported. See Table 1 for recent results. The main challenges for precision grammar-guided parsing are unsat416 Lexicon Construction Lexicalized CFG Counterpart Construction Lexicalized some _some_q _some_q bv SP-HD →D + N N bv D N D want _want_1 _want_1 arg1 arg2 HD-CMP →V + HD-CMP HD-CMP arg2 V HD-CMP V go _go_1 _go_1 arg1 S↓SP-HD →SP-HD + HD-CMP arg1 HD-CMP SP-HD arg1 HD-CMP SP-HD Table 9: Rules of lexicalized and construction grammars that are extracted from the running example. Grammar EDMP EDMA EDM Construction 93.48 87.88 90.67 Lexicalized 92.14 81.05 86.63 Table 10: The EDM score on EDS development data with construction grammar and lexicalized grammar using syntax trees of coarse-grained labels and beam search. Grammar 1 2 3 4 5+ Construction 14234 3424 1486 732 418 Lexicalized 11653 5938 2358 396 11 Table 11: Comparison of the construction grammar and the lexicalized grammar extracted from EDS data. We use syntax trees of coarse-grained labels. isfactory coverage and efficiency that limit their uses in NLP applications. Even for treebanking on newswire data, i.e., Wall Street Journal data from Penn TreeBank (Marcus et al., 1993), ERG lacks analyses for c.a. 11% of sentences (Oepen et al., 2015). For text data from the web, e.g. tweets, this problem is even more serious. Moreover, checking all possible linguistic constraints makes a grammar-guided parser too slow for many realistic NLP applications. Robustness and efficiency, thus, are two major problems for practical NLP applications. Recent encouraging progress achieved with purely data-driven models helps resolve the above two problems. Nevertheless, it seems too radical to remove all explicit linguistic knowledge about the syntacto-semantic composition process, the key characteristics of natural languages. 
In this paper, we introduce a neural SHRG-based semantic parser that strikes a balance between datadriven and grammar-guided parsing. We encode deep linguistic knowledge partially in a symbolic way and partially in a statistical way. It is worth noting that the symbolic system is a derivational, generative-enumerative grammar, while the origin of the data source is grounded under a representational, model-theoretic grammar. While grammar writers may favor the convenience provided by a unification grammar formalism, a practical parser may re-use algorithms by another formalism by translating grammars through data. Experiments also suggest that (recurrent) neural network models are able to effectively gain some deep linguistic knowledge from annotations. 8 Conclusion The advantages of using graph grammars to resolve semantic parsing is clear in concept but underexploited in practice. Here, we have shown ways to improve SHRG-based string-to-semanticgraph parsing. Especially, we emphasize the importance of modeling syntax-semantic interface and the compositional property of semantic annotations. Just like recent explorations on many other NLP tasks, we also show that neural network models are very powerful to advance deep language understanding. Acknowledgments This work was supported by the National Natural Science Foundation of China (61772036, 61331011) and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. Weiwei Sun is the corresponding author. References Omri Abend and Ari Rappoport. 2013. Universal conceptual cognitive annotation (UCCA). In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 417 Sofia, Bulgaria, pages 228–238. http://www. aclweb.org/anthology/P13-1023. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for Sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse. Association for Computational Linguistics, Sofia, Bulgaria, pages 178–186. http:// www.aclweb.org/anthology/W13-2322. Emily M. Bender, Dan Flickinger, Stephan Oepen, Woodley Packard, and Ann A. Copestake. 2015. Layers of interpretation: On grammar and compositionality. In Proceedings of the 11th International Conference on Computational Semantics, IWCS 2015, 15-17 April, 2015, Queen Mary University of London, London, UK. pages 239– 249. http://aclweb.org/anthology/W/ W15/W15-0128.pdf. H. Borer. 2005a. In Name Only. Hagit Borer. Oxford University Press. https://books.google. com/books?id=cAEmAQAAIAAJ. H. Borer. 2005b. The Normal Course of Events. Hagit Borer. Oxford University Press. https://books.google.com/books?id= M48UPLst_MQC. H. Borer. 2013. Structuring Sense: Volume III: Taking Form. Borer, Hagit. OUP Oxford. https://books.google.com/books? id=tUkGAQAAQBAJ. Jan Buys and Phil Blunsom. 2017. Robust incremental neural semantic graph parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 1215–1226. http://aclweb. org/anthology/P17-1112. Junjie Cao, Sheng Huang, Weiwei Sun, and Xiaojun Wan. 2017. Parsing to 1-endpoint-crossing, pagenumber-2 graphs. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 2110–2120. http://aclweb. org/anthology/P17-1193. David Chiang, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, Bevan Jones, and Kevin Knight. 2013. Parsing graphs with Hyperedge Replacement Grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 924–932. http://www. aclweb.org/anthology/P13-1091. Noam Chomsky. 1970. Remarks on nominalization. In R. A. Jacobs and P. S. Rosenbaum, editors, Readings in English Transformational Grammar, Waltham, MA, pages 170–221. Stephen Clark, Julia Hockenmaier, and Mark Steedman. 2002. Building deep dependency structures using a wide-coverage CCG parser. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA.. pages 327–334. http://www. aclweb.org/anthology/P02-1042.pdf. Ann Copestake. 2009. Invited Talk: slacker semantics: Why superficiality, dependency and avoidance of commitment can be the right way to go. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009). Association for Computational Linguistics, Athens, Greece, pages 1– 9. http://www.aclweb.org/anthology/ E09-1001. Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan A. Sag. 2005. Minimal Recursion Semantics: An introduction. Research on Language and Computation pages 281–332. James Cross and Liang Huang. 2016. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1–11. https://aclweb.org/anthology/ D16-1001. F. Drewes, H.-J. Kreowski, and A. Habel. 1997. Hyperedge Replacement Graph Grammars. In Grzegorz Rozenberg, editor, Handbook of Graph Grammars and Computing by Graph Transformation, World Scientific Publishing Co., Inc., River Edge, NJ, USA, pages 95–162. http://dl.acm.org/ citation.cfm?id=278918.278927. Rebecca Dridan and Stephan Oepen. 2011. Parser evaluation using elementary dependency matching. In Proceedings of the 12th International Conference on Parsing Technologies. Dublin, Ireland, pages 225– 230. Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the Abstract Meaning Representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 1426–1436. http://www.aclweb. org/anthology/P14-1134. Dan Flickinger. 2000. On building a more efficient grammar by exploiting types. Nat. Lang. Eng. 6(1):15–28. 418 Angelina Ivanova, Stephan Oepen, Lilja Øvrelid, and Dan Flickinger. 2012. Who did what to whom? A contrastive study of syntacto-semantic dependencies. In Proceedings of the Sixth Linguistic Annotation Workshop. Jeju, Republic of Korea, pages 2–11. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics 4:313–327. https://transacl.org/ojs/ index.php/tacl/article/view/885. 
Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural amr: Sequence-to-sequence models for parsing and generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 146–157. http://aclweb. org/anthology/P17-1014. Marco Kuhlmann and Stephan Oepen. 2016. Towards a catalogue of linguistic graph banks. Computational Linguistics 42(4):819–827. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: the penn treebank. Computational Linguistics 19(2):313– 330. http://dl.acm.org/citation.cfm? id=972470.972475. Graham Neubig, Yoav Goldberg, and Chris Dyer. 2017. On-the-fly operation batching in dynamic computation graphs. In Advances in Neural Information Processing Systems. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov´a, Dan Flickinger, Jan Hajic, and Zdenka Uresov´a. 2015. Semeval 2015 task 18: Broad-coverage semantic dependency parsing. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). Stephan Oepen and Jan Tore Lønning. 2006. Discriminant-based mrs banking. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC-2006). European Language Resources Association (ELRA), Genoa, Italy. ACL Anthology Identifier: L06-1214. Hao Peng, Sam Thomson, and Noah A. Smith. 2017a. Deep multitask learning for semantic dependency parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 2037– 2048. http://aclweb.org/anthology/ P17-1186. Xiaochang Peng and Daniel Gildea. 2016. Uofr at semeval-2016 task 8: Learning Synchronous Hyperedge Replacement Grammar for AMR parsing. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). Association for Computational Linguistics, San Diego, California, pages 1185–1189. http://www.aclweb. org/anthology/S16-1183. Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A Synchronous Hyperedge Replacement Grammar based approach for AMR parsing. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, Beijing, China, pages 32–41. http://www.aclweb.org/ anthology/K15-1004. Xiaochang Peng, Chuan Wang, Daniel Gildea, and Nianwen Xue. 2017b. Addressing the data sparsity issue in neural amr parsing. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics, Valencia, Spain, pages 366–375. http://www.aclweb.org/ anthology/E17-1035. Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. The University of Chicago Press, Chicago. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 818– 827. http://aclweb.org/anthology/ P17-1076. Xun Zhang, Yantao Du, Weiwei Sun, and Xiaojun Wan. 2016. Transition-based parsing for deep dependency structures. Computational Linguistics 42(3):353–389. http://aclweb.org/ anthology/J16-3001.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 419–428 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 419 Using Intermediate Representations to Solve Math Word Problems Danqing Huang1∗, Jin-Ge Yao2, Chin-Yew Lin2, Qingyu Zhou3, and Jian Yin1 {huangdq2@mail2,issjyin@mail}.sysu.edu.cn {Jinge.Yao,cyl}@microsoft.com [email protected] 1 The School of Data and Computer Science, Sun Yat-sen University. Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, P.R.China 2Microsoft Research 3 Harbin Institute of Technology Abstract To solve math word problems, previous statistical approaches attempt at learning a direct mapping from a problem description to its corresponding equation system. However, such mappings do not include the information of a few higher-order operations that cannot be explicitly represented in equations but are required to solve the problem. The gap between natural language and equations makes it difficult for a learned model to generalize from limited data. In this work we present an intermediate meaning representation scheme that tries to reduce this gap. We use a sequence-to-sequence model with a novel attention regularization term to generate the intermediate forms, then execute them to obtain the final answers. Since the intermediate forms are latent, we propose an iterative labeling framework for learning by leveraging supervision signals from both equations and answers. Our experiments show using intermediate forms outperforms directly predicting equations. 1 Introduction There is a growing interest in math word problem solving (Kushman et al., 2014; Koncel-Kedziorski et al., 2015; Huang et al., 2017; Roy and Roth, 2018). It requires reasoning with respect to sets of numbers or variables, which is an essential capability in many other natural language understanding tasks. Consider the math problems shown in Table 1. To solve the problems, one needs to know how many numbers to be summed up (e.g. “2 numbers/3 numbers”), and the relation between ∗Work done while this author was an intern at Microsoft Research. 1) The sum of 2 numbers is 18. The first number is 4 more than the second number. Find the two numbers. Equations: x + y = 18, x = y + 4 2) The sum of 3 numbers is 15. The larger number is 4 times the smallest and the middle number is 5. What are the numbers? Equations: x + y + z = 15, x = 4 ∗z, y = 5 Table 1: Math word problems. Equations have lost the information of count, max, ordinal operations. variables (“the first/second number”). However, an equation system does not encode these information explicitly. For example, an equation represents “the sum of 2 numbers” as (x + y) and “the sum of 3 numbers” as (x + y + z). This makes it difficult to generalize to cases unseen from data (e.g. “the sum of 100 numbers”). This paper presents a new intermediate meaning representation scheme for solving math problems, aiming at closing the semantic gap between natural language and equations. To generate the intermediate forms, we adapt a sequence-to-sequence (seq2seq) network following recent work that tries to generate equations from problem descriptions for this task. Wang et al. (2017) have shown that seq2seq models have the power to generate equations of which problem types do not exist in training data. In this paper, we propose a new method which adds an extra meaning representation and generate an intermediate form as output. 
Additionally, we observe that the attention weights of the seq2seq model repetitively concentrates on numbers in the problem description. To address the issue, we further propose to use a form of attention regularization. To train the model without explicit annotations of intermediate forms, we propose an iterative la420 beling framework to leverage signals from both equations and their solutions. We first derive possible intermediate forms with ambiguity using the gold-standard equation systems, and use these forms for training to get a pre-trained model. Then we iteratively refine the intermediate forms using the learned model and the signals from the goldstandard answers. We conduct experiments on two publicly available math problem datasets. Our experimental results show that using the intermediate forms for training performs significantly better than directly mapping problems to equation systems. Furthermore, our iterative labeling framework creates better labeled data with intermediate forms for training, which leads to improved performance. To summarize, our contributions include: • We present a new intermediate meaning representation scheme for solving math problems. • We design an iterative labeling framework to automatically augment training data with intermediate meaning representation. • We propose using attention regularization in training to address the issue of incorrect attention in the seq2seq model. • We verify the effectiveness of our proposed solutions by conducting experiments and analysis on real-world datasets. 2 Meaning Representation In this section, we will compare meaning representations for solving math problems and introduce the proposed intermediate meaning representation. 2.1 Meaning Representations for Math Problem Solving We first discuss two meaning representation schemes for math problem solving. An equation system is a collection of one or more equations involving the same set of variables, which should be considered as highly abstractive symbolic representation. The Dolphin Language is introduced by Shi et al. (2015). It contains about 35 math-related classes and over 200 math-related functions, with additional classes and functions automatically mined from Freebase. Unfortunately, these representation schemes do not generalize well. Consider the two problems listed in Table 2. They belong to the same type of problems asking about the summation of consecutive integers. However, their meaning representations are very different in the Dolphin language and in equations. On one hand, the Dolphin language aligns too closely with natural utterances. Since the math problem descriptions are diverse in using various nouns and verbs, Dolphin language may represent the same type of problems differently. On the other hand, an equation system does not explicitly represent useful problem solving information such as “number of variables” and “numbers are consecutive” 2.2 Intermediate Meaning Representation To bridge the semantic gap between the two meaning representations, we present a new intermediate meaning representation scheme for math problem solving. It consists of 6 classes and 23 functions. Here a class is the set of entities with the same semantic properties and can be inherited (e.g. 2 ∈int, int ⊑num). A function is comprised of a name, a list of arguments with corresponding types, and a return type. For example, there are two overloaded definitions for the function math#sum (Table 3). 
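These functions are directly executable. As a toy illustration (not the actual executor used in our experiments), the consecutive-sum forms of Table 2, e.g. math#consecutive(3), math#sum(cnt: 3) = 267, can be expanded into an equation system and solved with SymPy:

```python
# math#consecutive(count) makes the count unknowns consecutive integers;
# math#sum(cnt: count) = total constrains their sum. SymPy does the solving.

import sympy

def solve_consecutive_sum(count, total):
    xs = sympy.symbols("x0:%d" % count)
    constraints = [sympy.Eq(xs[i + 1], xs[i] + 1) for i in range(count - 1)]
    constraints.append(sympy.Eq(sum(xs), total))
    return sympy.solve(constraints, xs)

print(solve_consecutive_sum(3, 267))  # {x0: 88, x1: 89, x2: 90}
print(solve_consecutive_sum(5, 95))   # {x0: 17, x1: 18, ..., x4: 21}
```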
These forms can be constructed by recursively applying joint operations on functions with class type constraints. Our representation scheme attempts to borrow the explicit use of higher-order functions from the Dolphin language, while avoiding to be too specific. Meanwhile, the intermediate forms are not as concise as the equation systems (Table 2). We leave more detailed definitions to the supplement material due to space limit. 3 Problem Statement Given a math word problem p, our goal is to predict its answer Ap. For each problem we have annotations of both the equation system Ep and the answer Ap available for training. The latent intermediate form will be denoted as LFp. We formulate math problem solving as a sequence prediction task, taking the sequence of words in a math problem as input and generating a sequence of tokens in its corresponding intermediate form as output. We then execute the intermediate form to obtain the final answer. We evaluate the task using answer accuracy on two publicly 421 Problem 1: Find three consecutive integers with a sum of 267. Dolphin Language: vf.find(cat(‘integers’), count:3, adj.consecutive, (math#sum(pron.that, 267, det.a))) Equation: x + (x + 1) + (x + 2) = 267 This work: math#consecutive(3), math#sum(cnt: 3) = 267 Problem 2: What are 5 consecutive numbers total 95? Dolphin Language: wh.vf.math.total((cat(‘numbers’), count:5, pron.what, adj.consecutive), 95) Equation: x + (x + 1) + (x + 2) + (x + 3) + (x + 4) = 95 This work: math#consecutive(5), math#sum(cnt: 5) = 95 Table 2: Different representations for math problems. Dolphin language is detailed (’all words’). Equation system is coarse that it represents many functions implicitly, such as “count”, “consecutive”. Classes int, float, num, unk, var, list Functions ret:int count($1:list): number of variables in $1 ret:var max($1:list): variable of max value in $1 ret:var math#product($1,$2:var): $1 times $2 ret:var math#sum($1:list): sum of variables in $1 ret:var math#sum(cnt:$1:int): sum of $1 unks Example Four times the sum of three and a number is 10. -> math#product(4, math#sum(3, m))=10 Table 3: Examples of classes and functions in our intermediate representation. “ret” stands for return type. $1, $2 are arguments with its types. available math word problem datasets1: • Number Word Problem (NumWord) is created by Shi et al. (2015). It contains 1,878 number word problems (verbally expressed number problems, such as the examples in Table 1). Its linear subset (subset of problems that can be solved by linear equation systems) has 986 problems, only involving four basic operations {+, −, ∗, /}. • Dolphin18K is created by Huang et al. (2016). It contains 18,711 math word problems collected from Yahoo! Answers2. Since it contains some problems without equations, we only use the subset of 10,644 problems which are paired with their equation systems. 1Other small datasets with 4 basic operations {+, −, ∗, /} and only one unknown variable are considered as subsets of our datasets. 2https://answers.yahoo.com/ 4 Model In this section, we describe (1) the basic sequenceto-sequence model, and (2) attention regularization. 4.1 Sequence-to-Sequence RNN Model Our baseline model is based on sequence-tosequence learning (Sutskever et al., 2014) with attention (Bahdanau et al., 2015) and copy mechanism (Gulcehre et al., 2016; Gu et al., 2016). Encoder: The encoder is implemented as a singlelayer bidirectional RNN with gated recurrent units (GRUs). 
It reads words one-by-one from the input problem, producing a sequence of hidden states hi = [hF i , hB i ] with: hF i = GRU(φin(xi), hF i−1), (1) hB i = GRU(φin(xi), hB i+1), (2) where φin maps each input word xi to a fixeddimensional vector. Decoder with Copying: At each decoding step j, the decoder receives the word embedding of the previous word, and an attention function is applied to attend over the input words as follows: eji = vT tanh(Whhi + Wssj + battn), (3) aji = exp(eji) Pm i′=1 exp(eji′), (4) cj = m X i=1 ajihi, (5) where sj is the decoder hidden state. Intuitively, aji defines the probability distribution of attention over the input words. They are computed from the unnormalized attention scores eji. cj is the context vector, which is the weighted sum of the encoder hidden states. 422 At each step, the model has to decide whether to generate a word from target vocabulary or to copy a number from the problem description. The generation probability pgen is modeled by: pgen = σ(wT c cj + wT s sj + bptr), (6) where wc, ws and bptr are model parameters. Next, pgen is used as a soft switch: with probability pgen the model decides to generate from the decoder state. The probability distribution over all words in the vocabulary is: PRNN = softmax(W[sj, cj] + b); (7) with probability 1 −pgen the model decides to directly copy an input word according to its attention weight. This leads to the final distribution of decoder state outputs: P(wj = w|·) = pgenPRNN(w) + (1 −pgen)aji (8) 4.2 Attention Regularization In preliminary experiments, we observed that the attention weights in the baseline model repetitively concentrate on the numbers in the math problem description (will be discussed in later sections with Figure 1(a)). To address this issue, we regularize the accumulative attention weights for each input token using a rectified linear unit (ReLU) layer, leading to the regularization term: AttReg = X i ReLU( T X j=0 aji −1), (9) where ReLU(x) = max(x, 0). This term penalizes the accumulated attention weights on specific locations if it exceeds 1. Adding this term to the primary loss to get the final objective function: Loss = − X i log p(yi|xi; θ) + λ ∗AttReg (10) where λ is a hyper-parameter that controls the contribution of attention regularization in the loss. The format of our attention regularization term resembles the coverage mechanism used in neural machine translation (Tu et al., 2016; Cohn et al., 2016), which encourages the coverage or fertility control for input tokens. 5 Iterative Labeling Since explicit annotations of our intermediate forms do not exist, we propose an iterative labeling framework for training. 5.1 Deriving Latent Forms From Equations We use the annotated equation systems to derive possible latent forms. First we define some simple rules that map an expression to our intermediate form. For example, we use regular expressions to match numbers and unknown variables. Example rules are shown in Table 4 (see Section 2 of the Supplement Material for all rules). Regex/Rules Class/Function \-?[0-9\.]+ num [a-z] unk <num>|<unk> var (<var>\+)+<var> math#sum($1:list) (<unk>\+)+<unk> math#sum $1=count of unk (cnt:$1:int) Table 4: Example rules for deriving latent forms from equation system. 5.2 Ambiguity in Derivation For one equation system, several latent form derivations are possible. Take the following math problem as an example: Find 3 consecutive integers that 3 times the sum of the first and the third is 79. 
Given the annotation of its equation 3 ∗(x + (x + 2)) = 79, there are two possible latent intermediate forms: 1) math#consecutive(3), math#product(3, math#sum(ordinal(1), ordinal(3)))=79 2) math#consecutive(3), math#product(3, math#sum(min(), max()))=79 There exist two types of ambiguities: a) operator ambiguity. (x + 2) may correspond to the operator “ordinal(3)” or “max()”; b) alignment ambiguity. For each “3” in the intermediate form, it is unclear which “3” in the input to be copied. Therefore, we may derive multiple intermediate forms with spurious ones for a training problem. We can see from Table 5 that both datasets we used have the issue of ambiguity, containing about 20% of problems with operator ambiguity and 10% of problems with alignment ambiguity. 5.3 Iterative Labeling To address the issue of ambiguity, we perform an iterative procedure where we search for correct intermediate forms to refine the training data. The 423 Dataset Ambiguous Ambig. #LF oper align (per prob) NumWord 28.0% 10.2% 3.67 (Linear) NumWord 26.9% 9.5% 4.29 (All) Dolphin18K 35.9% 9.6% 3.86 Table 5: Statistics of latent forms on two datasets. The percentage of problems with operator and alignment ambiguity is shown in the 2nd and 3rd columns respectively. We also show the average number of intermediate forms of problems with derivation ambiguity in the rightmost column. intuition is that a better model will lead to more correct latent form outputs, and more correct latent forms in training data will lead to a better model. Algorithm 1 Iterative Labeling Require: (1) Tuples of (math problem description, equation system, answer) Dn = {(pi, Epi, Api)} (2) Possible latent forms PLF = {(p0, LF 1 p0), (p0, LF 2 p0), ..., (pn, LF m pn)} (3) Beam size B (4) training iterations Niter, pre-training iterations Npre Procedure: for iter = 1 to Niter do if iter < Npre then θ ←MLE with PLF else for (p, LF) in PLF do C = Decode B latent forms given p for j in 1...B do if Ans(Cj) is correct then LF ⇐Cj break θ ←MLE with relabeled PLF Algorithm 1 describes our training procedure. As pre-training, we first update our model by maximum likelihood estimation (MLE) with all possible latent forms for Npre iterations. Ambiguous and wrong latent forms may appear at this stage. This pre-training is to ensure faster convergence and a more stable model. After Npre iterations, iterative labeling starts. We decode on each training instance with beam search. We declare Cj to be the consistent form in the beam if it can be executed to yield the correct answer. Therefore we can relabel the latent form LF with Cj for problem p and use the new pairs for training. If there is no consistent form in the beam, we keep it unchanged. With iterative labeling, we update our model by MLE with relabeled latent forms. There are two conditions of Npre to consider: (1) Npre = 0, the training starts iterative labeling without pre-training. (2) Npre = Niter, the training is pure MLE without iterative labeling. 6 Experiments In this section, we compare our method against several strong baseline systems. 6.1 Experiment Setting Following previous work, experiments are done in 5-fold cross validation: in each run, 20% is used for testing, 70% for training and 10% for validation. Representation To make the task easier with less auxiliary nuisances (e.g. bracket pairs), we represent the intermediate forms in Polish notation. 3 Implementation details The dimension of encoder hidden state, decoder hidden state and embeddings are 100 in NumWord, 512 in Dolphin18K. 
All model parameters are initialized randomly with Gaussian distribution. The hyperparameter λ for the weight of attention regularization is set to 1.0 on NumWord and 0.4 on Dolphin18K. We use SGD optimizer with decaying learning rate initialized as 0.5. Dropout rate is set to 0.5. The stopping criterion for training is validation accuracy with the maximum number of iterations no more than 150. The vocabulary consists of words observed no less than N times in training set. We set N = 1 for NumWord and N = 5 for Dolphin18K. The beam size is set to 20 in the decoding stage. For iterative training, we first train a model for Npre = 50 iterations for pre-training. We tune the hyper-parameters on a separate dev set. We consider the following models for comparisons: • Wang et al. (2017): a seq2seq model with attention mechanism. As preprocessing, it replaces numbers in the math problem with tokens {n1, n2, ...}. It generates equation 3https://en.wikipedia.org/wiki/Polish_ notation 424 as output and recovers {n1, n2, ...} to corresponding numbers in the post-processing. • Seq2Seq Equ: we implement a seq2seq model with attention and copy mechanism. Different from Wang et al. (2017), it has the ability to copy numbers from problem description. • Shi et al. (2015): a rule-based system. It parses math problems into Dolphin language trees with predefined grammars and reasons across trees to get the equations with rules. We report numbers from their paper as the Dolphin language is not publicly available. • Huang et al. (2017): the current state-of-theart model on Dolphin18K. It is a featurebased model. It generates candidate equations and find the most probable equation by ranking with predefined features. 6.2 Results Overall results are shown in Table 6. From the table, we can see that our final model (Seq2Seq LF+AttReg+Iter) outperforms the neural-based baseline models (Wang et al. (2017)4 and Seq2Seq Equ). On Number word problem dataset, our model already outperforms the state-of-the-art feature-based model (Huang et al., 2017) by 40.8% and is comparable to the ruled-based model (Shi et al., 2015)5. Advantage of intermediate forms: From the first two rows, we can see that the seq2seq model which is trained to generate intermediate forms (Seq2Seq LF) greatly outperforms the same model trained to generate equations (Seq2Seq Equ). The use of intermediate forms helps more on NumWord than on Dolphin18K. This result is expected as the Dolphin18K dataset is more challenging, containing many other types of difficulties discussed in Section 6.3. Effect of Attention Regularization: Attention regularization improves the seq2seq model on the two datasets as expected. Figure 1 shows an example. The attention regularization does meet the expectation: the alignments in Fig 1(b) are less concentrated on the numbers in the input and more importantly and alignments are more reasonable. For example, when generating “math#product” in 4We re-implement this since it is not publicly available. 5The system reports precision and recall. Since all the problems have answers, its recall equals to our accuracy. the output, the attention is now correctly focused on the input token “times”. Effect of Iterative Labeling: We can see from Table 6 that iterative labeling clearly contributes to the accuracy increase on the two datasets. Now we compare the performance with and without pretraining in Table 7. When Npre = 0 in Algorithm 1, the model starts iterative labeling from the first iteration without pre-training. 
We find that training with pre-training is substantially better, as the model without pre-training can be unstable and may generate misleading spurious candidate forms. Next, we compare the performance with pure MLE training on NumWord (Linear) in Figure 2. The difference is that after 50 iterations of MLE training, iterative labeling would refine the latent forms of training data. In pure MLE training, the accuracy converges after 130 iterations. By using iterative labeling, the model achieves the accuracy of 61.6% at 110th iterations, which is faster to converge and leads to better performance. Furthermore, to check whether iterative labeling actually resolves ambiguities in the intermediate forms of the training data, we manually sample 100 math problems with derivation ambiguity. 78% of them are relabeled with correct latent forms as we have checked. From Table 8, we can see the latent form of one training problem is iteratively refined to the correct one. 6.3 Model Comparisons To explore the generalization ability of the neural approach and better guide our future work, we compare the problems solved by our neural-based model with the rule-based model (Shi et al., 2015) and the feature-based model (Huang et al., 2017). Neural-based v. Rule-based: On NumWord (ALL), 41.6% of problems can be solved by both models. 15.5% can only be solved by our neural model, while the rule-based model generates an empty or a wrong semantic tree due to the limitations of the predefined grammar. The neural model is more consistent with flexible word order and insertion of lexical items (e.g. rule-based model cannot handle the extra word ‘whole’ in “Find two consecutive whole numbers”). Neural-based v. Feature-based: On Dolphin18K, 9.2% of problems can be solved by both models. 7.6% can only be solved by our neural model, which indicates that the neural model 425 Models NumWord NumWord Dolphin18K (Linear) (ALL) (Linear) Wang et al. (2017) 19.7% 14.6% 10.2% Seq2Seq Equ 26.8% 20.1% 13.1% Seq2Seq LF 50.8% 45.2% 13.9% Seq2Seq LF+AttReg 56.7% 54.0% 15.1% Seq2Seq LF+AttReg+Iter 61.6% 57.1% 16.8% Shi et al. (2015) 63.6% 60.2% n/a Huang et al. (2017) 20.8% n/a 28.4% Table 6: Performances on two datasets. “LF” means that the model generates latent intermediate forms instead of equation systems. “AttReg” means attention regularization. “Iter” means iterative labeling. “n/a” means that the model does not run on the dataset. (a) seq2seq LF (b) seq2seq LF+AttReg Figure 1: Example alignments for one problem (darker color represents higher attention score). NumWord NumWord Dolphin18K (Linear) (ALL) (Linear) -pre 58.1% 54.9% 14.9% +pre 61.6% 57.1% 16.8% Table 7: Performance with and without pretraining in iterative labeling. 50 100 150 0.45 0.5 0.55 0.6 0.65 number of iterations accuracy MLE iterative labeling Figure 2: Accuracy with different iterations of training on NumWord (Linear). can capture novel features that the feature-based model is missing. While our neural model is complementary to the above mentioned models, we observe two main types of errors (more examples are shown in the supplementary material): 1. Natural language variations: Same type of problems can be described in different scenarios. The two problems: (1) “What is 10 minus 2?” and (2) “John has 10 apples. How many apples does John have after giving Mary 2 apples”, lead to the same equation x = 10 −2 but with very different descriptions. 
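The relabeling that produces this progression is the inner loop of Algorithm 1; schematically, in Python (decode_beam and execute below stand in for the trained model's beam decoder and the intermediate-form executor — they are placeholders, not functions from our code):

```python
def relabel(training_data, decode_beam, execute):
    """training_data: list of dicts with keys 'problem', 'latent_form', 'answer'.
    decode_beam(problem): candidate intermediate forms, best-scored first.
    execute(form): the answer obtained by running the form (or None on failure).
    If no candidate yields the gold answer, the old latent form is kept."""
    for ex in training_data:
        for candidate in decode_beam(ex["problem"]):
            if execute(candidate) == ex["answer"]:
                ex["latent_form"] = candidate  # adopt the first consistent form
                break
    return training_data
```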
With limited size of data, we could not be expected to cover all possible ways to ask the same underlining math problems. Although the feature-based model has considered this with some features (e.g. POS Tag), the challenge is not well-addressed. 2. Nested operations: Some problems require multiple nested operations (e.g. “I think of a number, double it, add 3, multiply the answer by 3 and then add on the original number”). The rule-based model performs more consistently on this. 426 Training Problem: Find 2 0 consecutive integers which the first number is 2 1 more than 2 2 times the second number. Intermediate form in 1st iteration () math#consecutive(2 0), ordinal(1) = math#sum(“2 0”, math#product(“2 0”, “max()”) Intermediate form in 51st iteration () math#consecutive(2 0), ordinal(1) = math#sum(2 1, math#product(“2 0”, ordinal(2)) Intermediate form in 101st iteration () math#consecutive(2 0), ordinal(1) = math#sum(2 1, math#product(2 2, ordinal(2)) Table 8: Instance check of intermediate form for one math problem in several training iterations. 2 0 means the the first ‘2’ in the input and so on. Tokens with quote marks mean that they are incorrect. 7 Related Work Our work is related to two research areas: math word problem solving and semantic parsing. 7.1 Math Word Problem Solving There are two major components in this task: (1) meaning representation; (2) learning framework. Semantic Representation With the annotation of equation system, most approaches attempt at learning a direct mapping from math problem description to an equation system. There are other approaches considering an intermediate representation that bridges the semantic gap between natural language and equation system. Bakman (2007) defines a table of schema (e.g. TransferIn-Place, Transfer-In-Ownership) with associated formulas in natural utterance. A math problem can be mapped into a list of schema instantiations, then converted to equations. Liguda and Pfeiffer (2012) use augmented semantic network to represent math problems, where nodes represent concepts of quantities and edges represent transition states. Shi et al. (2015) design a new meaning representation language called Dolphin Language (DOL) with over 200 math-related functions and more additional noun functions. With predefined rules, these approaches accept limited well-format input sentences. Inspired by these representations, our work describes a new formal language which is more compact and is effective in facilitating better machine learning performance. Learning Framework In rule-based approaches (Bakman, 2007; Liguda and Pfeiffer, 2012; Shi et al., 2015), they map math problem description into structures with predefined grammars and rules. Feature-based approaches contain two stages: (1) generate equation candidates; They either replace numbers of existing equations in the training data as new equations (Kushman et al., 2014; Zhou et al., 2015; Upadhyay et al., 2016), or enumerate possible combinations of math operators and numbers and variables (Koncel-Kedziorski et al., 2015), which leads to intractably huge search space. (2) predict equation with features. For example, Hosseini et al. (2014) design features to classify verbs to addition or subtraction. Roy and Roth (2015); Roy et al. (2016) leverage the tree structure of equations. Mitra and Baral (2016); Roy and Roth (2018) design features for a few math concepts (e.g. Part-Whole, Comparison). Roy and Roth (2017) focus on the dependencies between number units. 
These approaches requires manual feature design and the features may be difficult to be generalized to other tasks. Recently, there are a few works trying to build an end-to-end system with neural models. Ling et al. (2017) consider multiple-choice math problems and use a seq2seq model to generate rationale and the final choice (i.e. A, B, C, D). Wang et al. (2017) apply a seq2seq model to generate equations with the constraint of single unknown variable. Similarly, we use the seq2seq model but with novel attention regularization to address incorrect attention weights in the seq2seq model. 7.2 Semantic Parsing Our work is also related to the classic settings of learning executable semantic parsers from indirect supervision (Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2011, 2013; Berant et al., 2013; Pasupat and Liang, 2016). Maximum marginal likelihood with beam search (Kwiatkowski et al., 2013; Pasupat and Liang, 2016; Ling et al., 2017) is traditionally used. It maximizes the marginal likelihood of all consistent logical forms being observed. Recently 427 reinforcement learning (Guu et al., 2017; Liang et al., 2017) has also been considered, which maximizes the expected reward over all possible logical forms. Different from them, we only consider one single consistent latent form per training instance by leveraging training signals from both the answer and the equation system, which should be more efficient for our task. 8 Conclusion This paper presents an intermediate meaning representation scheme for math problem solving that bridges the semantic gap between natural language and equation systems. To generate intermediate forms, we propose a seq2seq model with novel attention regularization. Without explicit annotations of latent forms, we design an iterative labeling framework for training. Experimental result shows that using intermediate forms is more effective than directly using equations. Furthermore, our iterative labeling effectively resolves ambiguities and leads to better performances. As shown in the error analysis, same types of problems can have different natural language expressions. In the future, we will focus on tackling this challenge. In addition, we plan to expand the coverage of our meaning representation to support more mathematic concepts. Acknowledgments This work is supported by the National Natural Science Foundation of China (61472453, U1401256, U1501252, U1611264,U1711261,U1711262). Thanks to the anonymous reviewers for their helpful comments and suggestions. References Yoav Artzi and Luke Zettlemoyer. 2011. Bootstrapping semantic parsers from conversations. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. In Transactions of the Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conferene on Learning Representation. Yefim Bakman. 2007. Robust understanding of word problems with extraneous information. Http://arxiv.org/abs/math/0701393. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. 
Driving semantic parsing from the worlds response. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning. Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment biases into an attentional neural translation model. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Kelvin Guu, Panupong Pasupat, Evan Zheran Liu, and Percy Liang. 2017. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics. Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Danqing Huang, Shuming Shi, Chin-Yew Lin, and Jian Yin. 2017. Learning fine-grained expressions to solve math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma. 2016. How well do computers solve math word problems? large-scale dataset construction and evaluation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 428 2015. Parsing algebraic word problems into equations. Transactions of the Association for Computational Linguistics, 3:585–597. Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luku Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Chen Liang, Jonathan Berant, Quoc Le, Kennet D.Forbus, and Ni Lao. 2017. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics. Percy Liang, Michael Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Christian Liguda and Thies Pfeiffer. 2012. Modeling math word problems with augmented semantic networks. In Natural Language Processing and Information Systems. International Conference on Applications of Natural Language to Information Systems (NLDB-2012), pages 247–252. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics. 
Arindam Mitra and Chitta Baral. 2016. Learning to use formulas to solve simple arithmetic problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Panupong Pasupat and Percy Liang. 2016. Inferring logical forms from denotations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Subhro Roy and Dan Roth. 2017. Unit dependency graph and its application to arithmetic word problem solving. In Proceedings of the 2017 Conference on Association for the Advancement of Artificial Intelligence. Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word problem solving. In Transactions of the Association for Computational Linguistic. Subhro Roy and Subhro Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1743–1752. The Association for Computational Linguistics. Subhro Roy, Shyam Upadhyay, and Dan Roth. 2016. Equation parsing: Mapping sentences to grounded equations. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Shuming Shi, Wang Yuehui, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. 2015. Automatically solving number word problems by semantic parsing and reasoning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Shyam Upadhyay, Ming-Wei Chang, Kai-Wei Chang, and Wen tau Yih. 2016. Learning from explicit and implicit supervision jointly for algebra word problems. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural model for math word problem problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015. Learn to solve algebra word problems using quadratic programming. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 34–45 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 34 Explicit Retrofitting of Distributional Word Vectors Goran Glavaˇs Data and Web Science Group University of Mannheim B6, 29, DE-68161 Mannheim [email protected] Ivan Vuli´c Language Technology Lab University of Cambridge 9 West Road, Cambridge CB3 9DA [email protected] Abstract Semantic specialization of distributional word vectors, referred to as retrofitting, is a process of fine-tuning word vectors using external lexical knowledge in order to better embed some semantic relation. Existing retrofitting models integrate linguistic constraints directly into learning objectives and, consequently, specialize only the vectors of words from the constraints. In this work, in contrast, we transform external lexico-semantic relations into training examples which we use to learn an explicit retrofitting model (ER). The ER model allows us to learn a global specialization function and specialize the vectors of words unobserved in the training data as well. We report large gains over original distributional vector spaces in (1) intrinsic word similarity evaluation and on (2) two downstream tasks – lexical simplification and dialog state tracking. Finally, we also successfully specialize vector spaces of new languages (i.e., unseen in the training data) by coupling ER with shared multilingual distributional vector spaces. 1 Introduction Algebraic modeling of word vector spaces is one of the core research areas in modern Natural Language Processing (NLP) and its usefulness has been shown across a wide variety of NLP tasks (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016). Commonly employed distributional models for word vector induction are based on the distributional hypothesis (Harris, 1954), i.e., they rely on word co-occurrences obtained from large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014a; Levy et al., 2015; Bojanowski et al., 2017). The dependence on purely distributional knowledge results in a well-known tendency of fusing semantic similarity with other types of semantic relatedness (Hill et al., 2015; Schwartz et al., 2015) in the induced vector spaces. Consequently, the similarity between distributional vectors indicates just an abstract semantic association and not a precise semantic relation (Yih et al., 2012; Mohammad et al., 2013). For example, it is difficult to discern synonyms from antonyms in distributional spaces. This property has a particularly negative effect on NLP applications like text simplification and statistical dialog modeling, in which discerning semantic similarity from other types of semantic relatedness is pivotal to the system performance (Glavaˇs and ˇStajner, 2015; Faruqui et al., 2015; Mrkˇsi´c et al., 2016; Kim et al., 2016b). A standard solution is to move beyond purely unsupervised learning of word representations, in a process referred to as word vector space specialization or retrofitting. Specialization models leverage external lexical knowledge from lexical resources, such as WordNet (Fellbaum, 1998), the Paraphrase Database (Ganitkevitch et al., 2013), or BabelNet (Navigli and Ponzetto, 2012), to specialize distributional spaces for a particular lexical relation, e.g., synonymy (Faruqui et al., 2015; Mrkˇsi´c et al., 2017) or hypernymy (Glavaˇs and Ponzetto, 2017). 
External constraints are commonly pairs of words between which a particular relation holds. Existing specialization methods exploit the external linguistic constraints in two prominent ways: (1) joint specialization models modify the learning objective of the original distributional model by integrating the constraints into it (Yu and Dredze, 2014; Kiela et al., 2015; Nguyen et al., 2016, inter alia); (2) post-processing models fine-tune distributional vectors retroactively after training to satisfy the external constraints (Faruqui et al., 2015; 35 Mrkˇsi´c et al., 2017, inter alia). The latter, in general, outperform the former (Mrkˇsi´c et al., 2016). Retrofitting models can be applied to arbitrary distributional spaces but they suffer from a major limitation – they locally update only vectors of words present in the external constraints, whereas vectors of all other (unseen) words remain intact. In contrast, joint specialization models propagate the external signal to all words via the joint objective. In this paper, we propose a new approach for specializing word vectors that unifies the strengths of both prior strategies, while mitigating their limitations. Same as retrofitting models, our novel framework, termed explicit retrofitting (ER), is applicable to arbitrary distributional spaces. At the same time, the method learns an explicit global specialization function that can specialize vectors for all vocabulary words, similar as in joint models. Yet, unlike the joint models, ER does not require expensive re-training on large text corpora, but is directly applied on top of any pre-trained vector space. The key idea of ER is to directly learn a specialization function in a supervised setting, using lexical constraints as training instances. In other words, our model, implemented as a deep feedforward neural architecture, learns a (non-linear) function which “translates” word vectors from the distributional space into the specialized space. We show that the proposed ER approach yields considerable gains over distributional spaces in word similarity evaluation on standard benchmarks (Hill et al., 2015; Gerz et al., 2016), as well as in two downstream tasks – lexical simplification and dialog state tracking. Furthermore, we show that, by coupling the ER model with shared multilingual embedding spaces (Mikolov et al., 2013a; Smith et al., 2017), we can also specialize distributional spaces for languages unseen in the training data in a zero-shot language transfer setup. In other words, we show that an explicit retrofitting model trained with external constraints from one language can be successfully used to specialize the distributional space of another language. 2 Related Work The importance of vector space specialization for downstream tasks has been observed, inter alia, for dialog state tracking (Mrkˇsi´c et al., 2017; Vuli´c et al., 2017b), spoken language understanding (Kim et al., 2016b,a), judging lexical entailment (Nguyen et al., 2017; Glavaˇs and Ponzetto, 2017; Vuli´c and Mrkˇsi´c, 2017), lexical contrast modeling (Nguyen et al., 2016), and cross-lingual transfer of lexical resources (Vuli´c et al., 2017a). A common goal pertaining to all retrofitting models is to pull the vectors of similar words (e.g., synonyms) closer together, while some models also push the vectors of dissimilar words (e.g., antonyms) further apart. The specialization methods fall into two categories: (1) joint specialization methods, and (2) post-processing (i.e., retrofitting) methods. 
Methods from both categories make use of similar lexical resources – they typically leverage WordNet (Fellbaum, 1998), FrameNet (Baker et al., 1998), the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015), morphological lexicons (Cotterell et al., 2016), or simple handcrafted linguistic rules (Vuli´c et al., 2017b). In what follows, we discuss the two model categories. Joint Specialization Models. These models integrate external constraints into the distributional training procedure of general word embedding algorithms such as CBOW, Skip-Gram (Mikolov et al., 2013b), or Canonical Correlation Analysis (Dhillon et al., 2015). They modify the prior or the regularization of the original objective (Yu and Dredze, 2014; Xu et al., 2014; Bian et al., 2014; Kiela et al., 2015) or integrate the constraints directly into the, e.g., an SGNS- or CBOW-style objective (Liu et al., 2015; Ono et al., 2015; Bollegala et al., 2016; Osborne et al., 2016; Nguyen et al., 2016, 2017). Besides generally displaying lower performance compared to retrofitting methods (Mrkˇsi´c et al., 2016), these models are also tied to the distributional objective and any change of the underlying distributional model induces a change of the entire joint model. This makes them less versatile than the retrofitting methods. Post-Processing Models. Models from the popularly termed retrofitting family inject lexical knowledge from external resources into arbitrary pretrained word vectors (Faruqui et al., 2015; Jauhar et al., 2015; Rothe and Sch¨utze, 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkˇsi´c et al., 2016). These models fine-tune the vectors of words present in the linguistic constraints to reflect the ground-truth lexical knowledge. While the large majority of specialization models from both classes operate only with similarity constraints, a line of recent work (Mrkˇsi´c et al., 2016; Mrkˇsi´c et al., 2017; Vuli´c et al., 2017b) demonstrates that knowledge about both similar and dissimilar words leads to 36 improved performance in downstream tasks. The main shortcoming of the existing retrofitting models is their inability to specialize vectors of words unseen in external lexical resources. Our explicit retrofitting framework brings together desirable properties of both model classes: (1) unlike joint models, it does not require adaptation to the underlying distributional model and expensive re-training, i.e., it is applicable to any pre-trained distributional space; (2) it allows for easy integration of both similarity and dissimilarity constraints into the specialization process; and (3) unlike post-processors, it specializes the full vocabulary of the original distributional space and not only vectors of words from external constraints. 3 Explicit Retrofitting Our explicit retrofitting (ER) approach, illustrated by Figure 1a, consists of two major components: (1) an algorithm for preparing training instances from external lexical constraints, and (2) a supervised specialization model, based on a deep feedforward neural network. This network, shown in Figure 1b learns a non-linear global specialization function from the training instances. 3.1 From Constraints to Training Instances Let X = {xi}N i=1, xi ∈Rd be the d-dimensional distributional vector space that we want to specialize (with V = {wi}N i=1 referring to the associated vocabulary) and let X′ = {x′i}N i=1 be the corresponding specialized vector space that we seek to obtain through explicit retrofitting. 
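For concreteness, the distributional space X with its vocabulary V, together with a distance function g over word vectors, can be represented as in the following minimal sketch. This is illustrative code rather than the authors' implementation; the class and function names are ours, and the choice of cosine distance anticipates the experimental configuration reported later in the paper.

```python
import numpy as np

class DistributionalSpace:
    """The space X = {x_i}: one d-dimensional row vector per word of the vocabulary V."""
    def __init__(self, words, vectors):
        self.vocab = {w: i for i, w in enumerate(words)}    # V = {w_i}, i = 1..N
        self.X = np.asarray(vectors, dtype=np.float32)      # X, shape (N, d)

    def vec(self, word):
        return self.X[self.vocab[word]]

def g(x1, x2):
    """Distance between word vectors; cosine distance, as the paper later specifies."""
    return 1.0 - float(x1 @ x2 / (np.linalg.norm(x1) * np.linalg.norm(x2) + 1e-12))

# Toy usage with random stand-ins for real pre-trained vectors
rng = np.random.RandomState(0)
space = DistributionalSpace(["bright", "light", "dark"], rng.randn(3, 300))
print(g(space.vec("bright"), space.vec("light")))
```

The specialized space X′ is then simply another (N, d) matrix, obtained by applying the learned specialization function to every row of X.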
Let C = {(wi, wj, r)l}L l=1 be the set of L linguistic constraints from an external lexical resource, each consisting of a pair of vocabulary words wi and wj and a semantic relation r that holds between them. The most recent state-of-the-art retrofitting work (Mrkˇsi´c et al., 2017; Vuli´c et al., 2017b) suggests that using both similarity and dissimilarity constraints leads to better performance compared to using only similarity constraints. Therefore, we use synonymy and antonymy relations from external resources, i.e., rl ∈{ant, syn}. Let g be the function measuring the distance between words wi and wj based on their vector representations. The algorithm for preparing training instances from constraints is guided by the following assumptions: 1. All synonymy pairs (wi, wj, syn) should have a minimal possible distance score in the specialized space, i.e., g(x′i, x′j) = gmin;1 2. All antonymy pairs (wi, wj, ant) should have a maximal distance in the specialized space, i.e., g(x′i, x′j) = gmax;2 3. The distances g(x′i, x′k) in the specialized space between some word wi and all other words wk that are not synonyms or antonyms of wi should be in the interval (gmin, gmax). Our goal is to discern semantic similarity from semantic relatedness by comparing, in the specialized space, the distances between word pairs (wi, wj, r) ∈C with distances that words wi and wj from those pairs have with other vocabulary words wm. It is intuitive to enforce that the synonyms are as close as possible and antonyms as far as possible. However, we do not know what the distances between non-synonymous and nonantonymous words g(x′i, xm) in the specialized space should look like. This is why, for all other words, similar to (Faruqui et al., 2016; Mrkˇsi´c et al., 2017), we assume that the distances in the specialized space for all word pairs not found in C should stay the same as in the distributional space: g(x′i, x′m) = g(xi, xm). This way we preserve the useful semantic content available in the original distributional space. In downstream tasks most errors stem from vectors of semantically related words (e.g., car – driver) being as similar as vectors of semantically similar words (e.g., car – automobile). To anticipate this, we compare the distances of pairs (wi, wj, r) ∈C with the distances for pairs (wi, wm) and (wj, wn), where wm and wn are negative examples: the vocabulary words that are most similar to wi and wj, respectively, in the original distributional space X. Concretely, for each constraint (wi, wj, r) ∈C we retrieve (1) K vocabulary words {wk m}K k=1 that are closest in the input distributional space (according to the distance function g) to the word wi and (2) K vocabulary words {wk n}K k=1 that are closest to the word wj. We then create, for each constraint (wi, wj, r) ∈C, a corresponding set M (termed micro-batch) of 2K + 1 embedding pairs coupled with a corresponding distance in the input distributional space: 1The minimal distance value is gmin = 0 for, e.g., cosine distance or Euclidean distance. 2While some distance functions do have a theoretical maximum (e.g., gmax = 2 for cosine distance), others (e.g., Euclidean distance) may be theoretically unbounded. For unbounded distance measures, we propose using the maximal distance between any two words from the vocabulary as gmax. 37 External knowledge (bright, light, syn) (source, target, ant) (buy, acquire, syn) ... Distributional vector space acquire [0.11, -0.23, ...,1.11] bright [0.11, -0.23, ..., 1.11] buy [-0.41, 0.29, ..., -1.07] ... 
target [-1.7, 0.13, ..., -0.92] top [-0.21, -0.52, ..., 0.47] ... Training instances (micro-batches) micro-batch 1: original: vbright, vlight : 0.0 neg 1: Vbright, Vsunset : 0.35 neg 2: Vlight, Vbulb : 0.27 micro-batch 2: original: vsource, vtarget : 2.0 neg 1: Vsource, Vriver : 0.29 neg 2: Vtarget, Vbullet : 0.41 ... Specialization model (non-linear regression) ... ... ... ... ... ... ... ... ... ... g: distance function f: specialization function (a) Illustration of the explicit retrofitting approach External knowledge (bright, light, syn) (source, target, ant) (buy, acquire, syn) ... Distributional vector space acquire [0.11, -0.23, ...,1.11] bright [0.11, -0.23, ..., 1.11] buy [-0.41, 0.29, ..., -1.07] ... target [-1.7, 0.13, ..., -0.92] top [-0.21, -0.52, ..., 0.47] ... Training instances (micro-batches) micro-batch 1: original: vbright, vlight : 0.0 neg 1: Vbright, Vsunset : 0.35 neg 2: Vlight, Vbulb : 0.27 micro-batch 2: original: vsource, vtarget : 2.0 neg 1: Vsource, Vriver : 0.29 neg 2: Vtarget, Vbullet : 0.41 ... Specialization model (non-linear regression) ... ... ... ... ... ... ... ... ... ... g: distance function f: specialization function xj xi x’j=f(xj) x’i=f(xi) (b) Supervised specialization model Figure 1: (a) High-level illustration of the explicit retrofitting approach: lexical constraints, i.e., pairs of synonyms and antonyms, are transformed into respective micro-batches, which are then used to train the supervised specialization model. (b) The low-level implementation of the specialization model, combining the non-linear embedding specialization function f, defined as the deep fully-connected feed-forward network, with the distance metric g, measuring the distance between word vectors after their specialization. M (wi, wj, r) = {(xi, xj, gr)} ∪ {(xi, xk m, g(xi, xk m))}K k=1 ∪ {(xj, xk n, g(xj, xk n))}K k=1 (1) with gr = gmin if r = syn; gr = gmax if r = ant. 3.2 Non-Linear Specialization Function Our retrofitting framework learns a global explicit specialization function which, when applied on a distributional vector space, transforms it into a space that better captures semantic similarity, i.e., discerns similarity from all other types of semantic relatedness. We seek the optimal parameters θ of the parametrized function f(x; θ) : Rd →Rd (where d is the dimensionality of the input space). The specialized embedding x′i of the word wi is then obtained as x′i = f(xi; θ). The specialized space X′ is obtained by transforming distributional vectors of all vocabulary words, X′ = f(X; θ). We define the specialization function f to be a multi-layer fully-connected feed-forward network with H hidden layers and non-linear activations φ. The illustration of this network is given in Figure 1b. The i-th hidden layer is defined with a weight matrix Wi and a bias vector bi: hi(x; θi) = φ  hi−1(x; θi−1)Wi + bi (2) where θi is the subset of network’s parameters up to the i-th layer. Note that in this notation, x = h0(x; ∅) and x′ = f(x, θ) = hH(x; θ). Let dh be the size of the hidden layers. The network’s parameters are then as follows: W1 ∈Rd×dh; Wi ∈Rdh×dh, i ∈{2, . . . , H −1}; WH ∈ Rdh×d; bi ∈Rdh, i ∈{1, . . . , H −1}; bH ∈Rd. 3.3 Optimization Objectives We feed the micro-batches consisting of 2K + 1 training instances to the specialization model (see Section 3.1). 
Each training instance consists of a pair of distributional (i.e., unspecialized) embedding vectors xi and xj and a score g denoting the desired distance between the specialized vectors x′i and x′j of corresponding words wi and wj. Mean Square Distance Objective (ER-MSD). Let our training batch consist of N training instances, {(xi 1, xi 2, gi)}N i=1. The simplest objective function is then the difference between the desired and obtained distances of specialized vectors: JMSD = N X i=1  g(f(xi 1), f(xi 2)) −gi2 (3) By minimizing the MSD objective we simply force the specialization model to produce a specialized embedding space X′ in which distances between all synonyms amount to gmin, distances between all antonyms amount to gmax and distances between all other word pairs remain the same as in the original space. The MSD objective does not leverage negative examples: it only indirectly enforces that synonym (or antonym) pairs (wi, wj) have smaller (or larger) distances than corresponding non-constraint word pairs (wi, wk) and (wj, wk). Contrastive Objective (ER-CNT). An alternative to MSD is to directly contrast the distances of constraint pairs (i.e., antonyms and synonyms) 38 with the distances of their corresponding negative examples, i.e., the pairs from their respective microbatch (cf. Eq. (1) in Section 3.1). Such an objective should directly enforce that the similarity scores for synonyms (antonyms) (wi, wj) are larger (or smaller, for antonyms) than for pairs (wi, wk) and (wj, wk) involving the same words wi and wj, respectively. Let S and A be the sets of microbatches created from synonymy and antonymy constraints. Let Ms = {(xi 1, xi 2, gi)}2K+1 i=1 be one micro-batch created from one synonymy constraint and let Ma be the analogous micro-batch created from one antonymy constraint. Let us then assume that the first triple (i.e., for i = 1) in every microbatch corresponds to the constraint pair and the remaining 2K triples (i.e., for i ∈{2, . . . , 2K + 1}) to respective non-constraint word pairs. We then define the contrastive objective as follows: JCNT = X Ms∈S 2K+1 X i=2  (gi −gmin) −(g′i −g′1) 2 + X Ma∈A 2K+1 X i=2  (gmax −gi) −(g′1 −g′i) 2 where g′ is a short-hand notation for the distance between vectors in the specialized space, i.e., g′(x1, x2) = g(x′ 1, x′ 2) = g(f(x1), f(x2)). Topological Regularization. Because the distributional space X already contains useful semantic information, we want our specialized space X′ to move similar words closer together and dissimilar words further apart, but without disrupting the overall topology of X. To this end, we define an additional regularization objective that measures the distance between the original vectors x1 and x2 and their specialized counterparts x′ 1 = f(x1) and x′ 2 = f(x2), for all examples in the training set: JREG = N X i=1 g(xi 1, f(xi 1)) + g(xi 2, f(xi 2)) (4) We minimize the final objective function J′ = J + λJREG. J is either JMSD or JCNT and λ is the regularization factor which determines how strictly we retain the topology of the original space. 4 Experimental Setup Distributional Vectors. 
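Before the experimental details, the specialization function f (Section 3.2) and the regularized objective J′ = JMSD + λJREG (Section 3.3) can be combined into a short training sketch. This is an illustrative PyTorch sketch, not the authors' released implementation; the input dimensionality d = 300 is an assumption, while H = 5, dh = 1000, λ = 0.3, and the Adam learning rate are the values reported below. The contrastive objective JCNT is implemented analogously, by contrasting the specialized distance of each constraint pair with the distances of the negative pairs from its micro-batch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpecializationNet(nn.Module):
    """f(x; theta): H fully-connected layers with tanh activations, mapping R^d -> R^d (Eq. 2)."""
    def __init__(self, d=300, d_hidden=1000, H=5):
        super().__init__()
        dims = [d] + [d_hidden] * (H - 1) + [d]
        self.layers = nn.ModuleList([nn.Linear(dims[i], dims[i + 1]) for i in range(H)])

    def forward(self, x):
        for layer in self.layers:
            x = torch.tanh(layer(x))
        return x

def cos_dist(a, b):
    """The distance g; applied to f(x1), f(x2) it plays the role of g'."""
    return 1.0 - F.cosine_similarity(a, b, dim=-1)

def er_msd_loss(f, x1, x2, target_g, lam=0.3):
    """J' = J_MSD + lambda * J_REG (Eqs. 3 and 4) over a batch of (x1, x2, g) triples."""
    fx1, fx2 = f(x1), f(x2)
    j_msd = ((cos_dist(fx1, fx2) - target_g) ** 2).sum()     # Eq. (3)
    j_reg = (cos_dist(x1, fx1) + cos_dist(x2, fx2)).sum()    # Eq. (4)
    return j_msd + lam * j_reg

# Toy usage: one Adam step on a random mini-batch of 900 triples (100 micro-batches of 9)
f = SpecializationNet()
optimizer = torch.optim.Adam(f.parameters(), lr=1e-4)
x1, x2 = torch.randn(900, 300), torch.randn(900, 300)
target_g = torch.rand(900) * 2.0
optimizer.zero_grad()
er_msd_loss(f, x1, x2, target_g).backward()
optimizer.step()
```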
In order to estimate the robustness of the proposed explicit retrofitting procedure, we experiment with three different publicly available and widely used collections of pre-trained distributional vectors for English: (1) SGNS-W2 – vectors trained on the Wikipedia dump from the Polyglot project (Al-Rfou et al., 2013) using the Skip-Gram algorithm with Negative Sampling (SGNS) (Mikolov et al., 2013b) by Levy and Goldberg (2014b), using the context windows of size 2; (2) GLOVE-CC – vectors trained with the GloVe (Pennington et al., 2014) model on the Common Crawl; and (3) FASTTEXT – vectors trained on Wikipedia with a variant of SGNS that builds word vectors by summing the vectors of their constituent character n-grams (Bojanowski et al., 2017). Linguistic Constraints. We experiment with the sets of linguistic constraints used in prior work (Zhang et al., 2014; Ono et al., 2015). These constraints, extracted from WordNet (Fellbaum, 1998) and Roget’s Thesaurus (Kipfer, 2009), comprise a total of 1,023,082 synonymy word pairs and 380,873 antonymy word pairs. Although this seems like a large number of linguistic constraints, there is only 57,320 unique words in all synonymy and antonymy constraints combined, and not all of these words are found in the dictionary of the pre-trained distributional vector space. For example, only 15.3% of the words from constraints are found in the whole vocabulary of SGNS-W2 embeddings. Similarly, we find only 13.3% and 14.6% constraint words among the 200K most frequent words from the GLOVE-CC and FASTTEXT vocabularies, respectively. This low coverage emphasizes the core limitation of current retrofitting methods, being able to specialize only the vectors of words seen in the external constraints, and the need for our global ER method which can specialize all word vectors from the distributional space. ER Model Configuration. In all experiments, we set the distance function g to cosine distance: g(x1, x2) = 1−(x1 ·x2/(∥x1∥∥x2∥)) and use the hyperbolic tangent as activation, φ = tanh. For each constraint (wi, wj), we create K = 4 corresponding negative examples for both wi and wj, resulting in micro-batches with 2K + 1 = 9 training instances.3 We separate 10% of the created micro-batches as the validation set. We then tune the hyper-parameter values, the number of hidden layers H = 5 and their size dh = 1000, and the 3For K < 4 we observed significant performance drop. Setting K > 4 resulted in negligible performance gains but significantly increased the model training time. 39 topological regularization factor λ = 0.3 by minimizing the model’s objective J′ on the validation set. We train the model in mini-batches, each containing Nb = 100 constraints (i.e., 900 training instances, see above), using the Adam optimizer (Kingma and Ba, 2015) with initial learning rate set to 10−4. We use the loss on the validation set as the early stopping criteria. 5 Results and Discussion 5.1 Word Similarity Evaluation Setup. We first evaluate the quality of the explicitly retrofitted embedding spaces intrinsically, on two word similarity benchmarks: SimLex-999 dataset (Hill et al., 2015) and SimVerb3500 (Gerz et al., 2016), a recent dataset containing human similarity ratings for 3,500 verb pairs.4 We use Spearman’s ρ rank correlation between gold and predicted word pair scores as the evaluation metric. We evaluate the specialized embedding spaces in two settings. 
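In both settings, the evaluation itself is identical: Spearman's ρ between the gold ratings and the cosine similarities of the (specialized) vectors. A minimal sketch follows; the list-of-triples benchmark format and the function names are simplifying assumptions, not the authors' evaluation code.

```python
import numpy as np
from scipy.stats import spearmanr

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def word_similarity_rho(pairs, vectors):
    """Spearman's rho between human ratings and model similarities.

    pairs:   iterable of (word1, word2, gold_score) triples, e.g. read from SimLex-999
    vectors: dict mapping each word to its (specialized) vector
    """
    gold, pred = [], []
    for w1, w2, score in pairs:
        if w1 in vectors and w2 in vectors:
            gold.append(score)
            pred.append(cos_sim(vectors[w1], vectors[w2]))
    rho, _ = spearmanr(gold, pred)
    return rho

# Toy usage with random stand-ins for real vectors and ratings
rng = np.random.RandomState(0)
vecs = {w: rng.randn(300) for w in ["car", "automobile", "driver", "wheel"]}
print(word_similarity_rho([("car", "automobile", 8.9),
                           ("car", "driver", 3.3),
                           ("car", "wheel", 4.0)], vecs))
```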
In the first setting, termed lexically disjoint, we remove from our training set all linguistic constraints that contain any of the words found in SimLex or SimVerb. This way, we effectively evaluate the model’s ability to generalize the specialization function to unseen words. In the second setting (lexical overlap) we retain the constraints containing SimLex or SimVerb words in the training set. For comparison, we also report performance of the state-of-the-art local retrofitting model ATTRACT-REPEL (Mrkˇsi´c et al., 2017), which is able to specialize only the words from the linguistic constraints. Results. The results with our ER model applied to three distributional spaces are shown in Table 1. The scores suggest that the proposed ER model is universally useful and robust. The ER-specialized spaces outperform original distributional spaces across the board, for both objective functions. The results in the lexically disjoint setting are especially indicative of the improvements achieved by the ER. For example, we achieve a correlation gain of 18% for the GLOVE-CC vectors on SimLex using a specialization function learned without seeing a single constraint with any SimLex word. 4Other word similarity datasets such as MEN (Bruni et al., 2014) or WordSim-353 (Finkelstein et al., 2002) conflate the concepts of true semantic similarity and semantic relatedness in a broader sense. In contrast, SimLex and SimVerb explicitly discern between the two, with pairs of semantically related but not similar words (e.g. car and wheel) having low ratings. In the lexical overlap setting, we observe substantial gains only for GLOVE-CC. The modest gains in this setting with FASTTEXT and SGNSW2 in fact strengthen the impression that the ER model learns a general specialization function, i.e., it does not “overfit” to words from linguistic constraints. The ER model with the contrastive objective (ER-CNT) yields better performance on average than the one using the simpler square distance objective (ER-MSD). This is expected, given that the contrastive objective enforces the model to distinguish pairs of semantically (dis)similar words from pairs of semantically related words. Finally, the post-processing ATTRACT-REPEL model based on local vector updates seems to substantially outperform the ER method in this task. The gap is especially visible for FASTTEXT and SGNS-W2 vectors. However, since ATTRACTREPEL specializes only words seen in linguistic constraints,5 its performance crucially depends on the coverage of test set words in the constraints. ATTRACT-REPEL excels on the intrinsic evaluation as the constraints cover 99.2% of SimLex words and 99.9% of SimVerb words. However, its usefulness is less pronounced in real-life downstream scenarios in which such high coverage cannot be guaranteed, as demonstrated in Section 5.3. Analysis. We examine in more detail the performance of the ER model with respect to (1) the type of constraints used for training the model: synonyms and antonyms, only synonyms, or only antonyms and (2) the extent to which we retain the topology of the original distributional space (i.e., with respect to the value of the topological regularization factor λ). All reported results were obtained by specializing the GLOVE-CC distributional space in the lexically disjoint setting (i.e., employed constraints did not contain any of the SimLex or SimVerb words). 
In Table 2 we show the specialization performance of the ER-CNT models (H = 5, λ = 0.3), using different types of constraints on SimLex999 (SL) and SimVerb-3500 (SV). We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively. Clearly, we obtain the best specialization when combining synonyms and antonyms. Note, however, that using 5This is why ATTRACT-REPEL cannot be applied in the lexically disjoint setting: the scores simply stay the same. 40 Setting: lexically disjoint Setting: lexical overlap GLOVE-CC FASTTEXT SGNS-W2 GLOVE-CC FASTTEXT SGNS-W2 SL SV SL SV SL SV SL SV SL SV SL SV Distributional (X) .407 .280 .383 .247 .414 .272 .407 .280 .383 .247 .414 .272 ATTRACT-REPEL .407 .280 .383 .247 .414 .272 .690 .578 .629 .502 .658 .544 ER-Specialized (X′ = f(X)) ER-MSD .483 .345 .429 .275 .445 .302 .500 .358 .445 .284 .469 .323 ER-CNT .582 .439 .433 .272 .435 .329 .623 .519 .419 .335 .449 .355 Table 1: Spearman’s ρ correlation scores for three standard English distributional vectors spaces on English SimLex-999 (SL) and SimVerb-3500 (SV), using explicit retrofitting models with two different objective functions (ER-MSD and ER-CNT, cf. Section 3.3). Constraints (ER-CNT model) SL SV Synonyms only .465 .339 Antonyms only .451 .317 Synonyms + Antonyms .582 .439 Table 2: Performance (ρ) on SL and SV for ERCNT models trained with different constraints. Figure 2: Specialization performance on SimLex999 (blue line) and SimVerb-3500 (red line) for ER models with different topology regularization factors λ. Dashed lines indicate performance levels of the distributional (i.e., unspecialized) space. only synonyms or only antonyms also improves over the original distributional space. Next, in Figure 2 we depict the specialization performance (on SimLex and SimVerb) of the ER models with different values of the topology regularization factor λ (H fixed to 5). The best performance for is obtained for λ = 0.3. Smaller lambda values overly distort the original distributional space, whereas larger lambda values dampen the specialization effects of linguistic constraints. 5.2 Language Transfer Readily available large collections of synonymy and antonymy word pairs do not exist for many languages. This is why we also investigate zeroshot specialization: we test if it is possible, with the help of cross-lingual word embeddings, to transfer the specialization knowledge learned from English constraints to languages without any training data. Evaluation Setup. We use the mapping model of Smith et al. (2017) to induce a multilingual vecModel German Italian Croatian Distributional (X) .407 .360 .249 ER-Specialized (X′) ER-MSD .415 .406 .287 ER-CNT .533 .448 .315 Table 3: Spearman’s ρ correlation scores for German, Italian, and Croatian embeddings in the transfer setup: the vectors are specialized using the models trained on English constraints and evaluated on respective language-specific SimLex-999 variants. tor space6 containing word vectors of three other languages – German, Italian, and Croatian – along with the English vectors.7 Concretely, we map the Italian CBOW vectors (Dinu et al., 2015), German FastText vectors trained on German Wikipedia (Bojanowski et al., 2017), and Croatian Skip-Gram vectors trained on HrWaC corpus (Ljubeˇsi´c and Erjavec, 2011) to the GLOVE-CC English space. 
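Once such a projection into the shared space is available, the zero-shot transfer step itself is simple: map the foreign vectors into the English space and apply the English-trained specialization function. The sketch below is illustrative and assumes the projection matrix has already been learned; how the translation pairs used to learn it are obtained is described next.

```python
import numpy as np

def zero_shot_specialize(X_foreign, W_map, f):
    """Specialize a foreign-language space with an English-trained ER model.

    X_foreign: (N, d) matrix of, e.g., German, Italian, or Croatian word vectors
    W_map:     (d, d) projection of the foreign space into the shared English space
    f:         the specialization function trained on English constraints only
    """
    X_mapped = X_foreign @ W_map       # place the foreign vectors in the English space
    return f(X_mapped)                 # apply the English-trained function f

# Toy usage: a random orthogonal projection and an identity "specializer" as stand-ins
rng = np.random.RandomState(0)
X_de = rng.randn(1000, 300).astype(np.float32)
W_map, _ = np.linalg.qr(rng.randn(300, 300).astype(np.float32))
X_de_specialized = zero_shot_specialize(X_de, W_map, f=lambda X: X)
```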
We create the translation pairs needed to learn the projections by automatically translating 4,000 most frequent English words to all three other languages with Google Translate. We then employ the ER model trained to specialize the GLOVE-CC space using the full set of English constraints, to specialize the distributional spaces of other languages. We evaluate the quality of the specialized spaces on the respective SimLex-999 dataset for each language (Leviant and Reichart, 2015; Mrkˇsi´c et al., 2017). Results. The results are provided in Table 3. They indicate that the ER models can substantially improve (e.g., by 13% for German vector space) over distributional spaces also in the language transfer setup without seeing a single constraint in the target language. These transfer results hold promise to support vector space specialization 6This model was chosen for its ease of use, readily available implementation, and strong comparative results (see (Ruder et al., 2017)). For more details we refer the reader to the original paper and the survey. 7The choice of languages was determined by the availability of the language-specific SimLex-999 variants. 41 even for resource-lean languages. The more sophisticated contrastive ER-CNT model variant again outperforms the simpler ER-MSD variant, and it does so for all three languages, which is consistent with the findings from the monolingual English experiments (see Table 1). 5.3 Downstream Tasks We now evaluate the impact of our global ER method on two downstream tasks in which differentiating semantic similarity from semantic relatedness is particularly important: lexical text simplification (LS) and dialog state tracking (DST). 5.3.1 Lexical Text Simplification Lexical simplification aims to replace complex words – used less frequently and known to fewer speakers – with their simpler synonyms that fit into the context, that is, without changing the meaning of the original text. Because retaining the meaning of the original text is a strict requirement, complex words need to be replaced with semantically similar words, whereas replacements with semantically related words (e.g., replacing “pilot” with “airplane” in “Ferrari’s pilot won the race”) produce incorrect text which is more difficult to comprehend. Simplification Using Distributional Vectors. We use the LIGHT-LS lexical simplification algorithm of Glavaˇs and ˇStajner (2015) which makes the word replacement decisions primarily based on semantic similarities between words in a distributional vector space.8 For each word in the input text LIGHT-LS retrieves most similar replacement candidates from the vector space. The candidates are then ranked according to several measures of simplicity and fitness for the context. Finally, the replacement is made if the top-ranked candidate is estimated to be simpler than the original word. By plugging-in vector spaces specialized by the ER model into LIGHT-LS, we hope to generate true synonymous candidates more frequently than with the unspecialized distributional space. Evaluation Setup. We evaluate LIGHT-LS on the LS dataset crowdsourced by Horn et al. (2014). For each indicated complex word Horn et al. (2014) collected 50 manual simplifications. We use two evaluation metrics from prior work (Horn et al., 2014; Glavaˇs and ˇStajner, 2015) to quantify the quality and frequency of word replacements: (1) 8The Light-LS implementation is available at: https://bitbucket.org/gg42554/embesimp GLOVE-CC FASTTEXT SGNS-W2 Emb. 
space A C A C A C Distributional 66.0 94.0 57.8 84.0 56.0 79.1 Specialized ATTRACT-REPEL 67.6 87.0 69.8 89.4 64.4 86.7 ER-CNT 73.8 93.0 71.2 93.2 68.4 92.3 Table 4: Lexical simplification performance with explicit retrofitting applied on three input spaces. accurracy (A) is the number of correct simplifications made (i.e., when the replacement made by the system is found in the list of manual replacements) divided by the total number of indicated complex words; and (2) change (C) is the percentage of indicated complex words that were replaced by the system (regardless of whether the replacement was correct). We plug into LIGHT-LS both unspecialized and specialized variants of three previously used English embedding spaces: GLOVECC, FASTTEXT, and SGNS-W2. Additionally, we again evaluate specializations of the same spaces produced by the state-of-the-art local retrofitting model ATTRACT-REPEL (Mrkˇsi´c et al., 2017). Results and Analysis. The results with LIGHTLS are summarized in Table 4. ER-CNT model yields considerable gains over unspecialized spaces for both metrics. This suggests that the ER-specialized embedding spaces allow LIGHTLS to generate true synonymous candidate replacements more often than with unspecialized spaces, and also verifies the importance of specialization for the LS task. Our ER-CNT model now also yields better results than ATTRACT-REPEL in a real-world downstream task. Only 59.6 % of all indicated complex words and manual replacement candidates from the LS dataset are now covered by the linguistic constraints. This accentuates the need to specialize the full distributional space in downstream applications as done by the ER model, while ATTRACT-REPEL is limited to local vector updates only of words seen in the constraints. By learning a global specialization function the proposed ER models seem more resilient to the observed drop in coverage of test words by linguistic constraints. Table 5 shows example substitutions of LIGHT-LS when using different embedding spaces: original GLOVE-CC space and its specializations obtained with ER-CNT and ATTRACT-REPEL. 5.3.2 Dialog State Tracking Finally, we also evaluate the importance of explicit retrofitting in a downstream language understand42 Text GLOVE-CC ATTRACT-REPEL ER-CNT Wrestlers portrayed a villain or a hero as they followed a series of events that built tension character protagonist demon This large version number jump was due to a feeling that a version 1.0 with no major missing pieces was imminent. ones songs parts The storm continued, crossing North Carolina , and retained its strength until June 20 when it became extratropical near Newfoundland lost preserved preserved Tibooburra has an arid, desert climate with temperatures soaring above 40 Celsius in summer, often reaching as high as 47 degrees Celsius. subtropical humid dry Table 5: Examples of lexical simplifications performed with the Light-LS tool when using different embedding spaces. The target word to be simplified is in bold. GLOVE-CC embedding vectors JGA Distributional (X) .797 Specialized (X′ = f(X)) ATTRACT-REPEL .817 ER-CNT .816 Table 6: DST performance of GLOVE-CC embeddings specialized using explicit retrofitting. ing task, namely dialog state tracking (DST) (Henderson et al., 2014; Williams et al., 2016). A DST model is typically the first component of a dialog system pipeline (Young, 2010), tasked with capturing user’s goals and updating the dialog state at each dialog turn. 
Similarly as in lexical simplification, discerning similarity from relatedness is crucial in DST (e.g., a dialog system should not recommend an “expensive pub in the south” when asked for a “cheap bar in the east”). Evaluation Setup. To evaluate the impact of specialized word vectors on DST, we employ the Neural Belief Tracker (NBT), a DST model that makes inferences purely based on pre-trained word vectors (Mrkˇsi´c et al., 2017).9 NBT composes word embeddings into intermediate utterance and context representations. For full model details, we refer the reader to the original paper. Following prior work, our DST evaluation is based on the Wizard-of-Oz (WOZ) v2.0 dataset (Wen et al., 2017; Mrkˇsi´c et al., 2017) which contains 1,200 dialogs (600 training, 200 validation, and 400 test dialogs). We evaluate performance of the distributional and specialized GLOVE-CC embeddings and report it in terms of joint goal accuracy (JGA), a standard DST evaluation metric. All reported results are averages over 5 runs of the NBT model. Results. We show DST performance in Table 6. The DST results tell a similar story like word similarity and lexical simplification results – the ER 9https://github.com/nmrksic/neural-belief-tracker model substantially improves over the distributional space. With linguistic specialization constraints covering 57% of words from the WOZ dataset, ER model’s performance is on a par with the ATTRACT-REPEL specialization. This further confirms our hypothesis that the importance of learning a global specialization for the full vocabulary in downstream tasks grows with the drop of the test word coverage by specialization constraints. 6 Conclusion We presented a novel method for specializing word embeddings to better discern similarity from other types of semantic relatedness. Unlike existing retrofitting models, which directly update vectors of words from external constraints, we use the constraints as training examples to learn an explicit specialization function, implemented as a deep feedforward neural network. Our global specialization approach resolves the well-known inability of retrofitting models to specialize vectors of words unseen in the constraints. We demonstrated the effectiveness of the proposed model on word similarity benchmarks, and in two downstream tasks: lexical simplification and dialog state tracking. We also showed that it is possible to transfer the specialization to languages without linguistic constraints. In future work, we will investigate explicit retrofitting methods for asymmetric relations like hypernymy and meronymy. We also intend to apply the method to other downstream tasks and to investigate the zero-shot language transfer of the specialization function for more language pairs. ER code is publicly available at: https:// github.com/codogogo/explirefit. Acknowledgments Ivan Vuli´c is supported by the ERC Consolidator Grant LEXICAL (no. 648909). 43 References Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In Proceedings of CoNLL, pages 183–192. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of ACL, pages 86–90. Jiang Bian, Bin Gao, and Tie-Yan Liu. 2014. Knowledge-powered deep learning for word embedding. In Proceedings of ECML-PKDD, pages 132– 148. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the ACL, 5:135–146. 
Danushka Bollegala, Mohammed Alsuhaibani, Takanori Maehara, and Ken-ichi Kawarabayashi. 2016. Joint word representation learning using a corpus and a semantic lexicon. In Proceedings of AAAI, pages 2690–2696. Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49:1–47. Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of EMNLP, pages 740–750. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537. Ryan Cotterell, Hinrich Sch¨utze, and Jason Eisner. 2016. Morphological smoothing and extrapolation of word embeddings. In Proceedings of ACL, pages 1651–1660. Paramveer S. Dhillon, Dean P. Foster, and Lyle H. Ungar. 2015. Eigenwords: Spectral word embeddings. Journal of Machine Learning Research, 16:3035– 3078. Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. In Proceedings of ICLR: Workshop Papers. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of NAACL-HLT, pages 1606–1615. Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection generation using character sequence to sequence learning. In Proceedings of NAACL-HLT, pages 634–643. Christiane Fellbaum. 1998. WordNet. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Information Systems, 20(1):116–131. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. In Proceedings of NAACL-HLT, pages 758–764. Daniela Gerz, Ivan Vuli´c, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. SimVerb-3500: A largescale evaluation set of verb similarity. In Proceedings of EMNLP, pages 2173–2182. Goran Glavaˇs and Simone Paolo Ponzetto. 2017. Dual tensor model for detecting asymmetric lexicosemantic relations. In Proceedings of EMNLP, pages 1758–1768. Goran Glavaˇs and Sanja ˇStajner. 2015. Simplifying lexical simplification: Do we need simplified corpora? In Proceedings of ACL, pages 63–68. Zellig S. Harris. 1954. Distributional structure. Word, 10(23):146–162. Matthew Henderson, Blaise Thomson, and Jason D. Wiliams. 2014. The Second Dialog State Tracking Challenge. In Proceedings of SIGDIAL, pages 263– 272. Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665–695. Colby Horn, Cathryn Manduca, and David Kauchak. 2014. Learning a lexical simplifier using wikipedia. In Proceedings of the ACL, pages 458–463. Sujay Kumar Jauhar, Chris Dyer, and Eduard H. Hovy. 2015. Ontologically grounded multi-sense representation learning for semantic vector space models. In Proceedings of NAACL, pages 683–693. Douwe Kiela, Felix Hill, and Stephen Clark. 2015. Specializing word embeddings for similarity or relatedness. In Proceedings of EMNLP, pages 2044– 2048. Joo-Kyung Kim, Marie-Catherine de Marneffe, and Eric Fosler-Lussier. 2016a. Adjusting word embeddings with semantic intensity orders. 
In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 62–69. Joo-Kyung Kim, Gokhan Tur, Asli Celikyilmaz, Bin Cao, and Ye-Yi Wang. 2016b. Intent detection using semantically enriched word embeddings. In Proceedings of SLT. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR (Conference Track). 44 Barbara Ann Kipfer. 2009. Roget’s 21st Century Thesaurus (3rd Edition). Philip Lief Group. Ira Leviant and Roi Reichart. 2015. Separated by an un-common language: Towards judgment language informed vector space modeling. CoRR, abs/1508.00106. Omer Levy and Yoav Goldberg. 2014a. Dependencybased word embeddings. In Proceedings of ACL, pages 302–308. Omer Levy and Yoav Goldberg. 2014b. Dependencybased word embeddings. In Proceedings of ACL, pages 302–308. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the ACL, 3:211–225. Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015. Learning semantic word embeddings based on ordinal knowledge constraints. In Proceedings of ACL, pages 1501–1511. Nikola Ljubeˇsi´c and Tomaˇz Erjavec. 2011. hrWaC and slWaC: Compiling web corpora for croatian and slovene. In Proceedings of TSD, pages 395–402. Oren Melamud, David McClosky, Siddharth Patwardhan, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embeddings. In Proceedings of NAACL-HLT, pages 1030– 1040. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. arXiv preprint, CoRR, abs/1309.4168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111– 3119. Saif M. Mohammad, Bonnie J. Dorr, Graeme Hirst, and Peter D. Turney. 2013. Computing lexical contrast. Computational Linguistics, 39(3):555–590. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of ACL, pages 1777–1788. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Blaise Thomson, Milica Gaˇsi´c, Lina Maria Rojas-Barahona, PeiHao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In Proceedings of NAACLHLT. Nikola Mrkˇsi´c, Ivan Vuli´c, Diarmuid ´O S´eaghdha, Ira Leviant, Roi Reichart, Milica Gaˇsi´c, Anna Korhonen, and Steve Young. 2017. Semantic specialisation of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the ACL, 5:309–324. Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217–250. Kim Anh Nguyen, Maximilian K¨oper, Sabine Schulte im Walde, and Ngoc Thang Vu. 2017. Hierarchical embeddings for hypernymy detection and directionality. In Proceedings of EMNLP, pages 233–243. Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2016. Integrating distributional lexical contrast into word embeddings for antonymsynonym distinction. In Proceedings of ACL, pages 454–459. Masataka Ono, Makoto Miwa, and Yutaka Sasaki. 2015. Word embedding-based antonym detection using thesauri and distributional information. In Proceedings of NAACL-HLT, pages 984–989. 
Dominique Osborne, Shashi Narayan, and Shay Cohen. 2016. Encoding prior knowledge with eigenword embeddings. Transactions of the ACL, 4:417–430. Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification. In Proceedings of ACL, pages 425–430. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP, pages 1532– 1543. Sascha Rothe and Hinrich Sch¨utze. 2015. AutoExtend: Extending word embeddings to embeddings for synsets and lexemes. In Proceedings of ACL, pages 1793–1803. Sebastian Ruder, Ivan Vuli´c, and Anders Søgaard. 2017. A survey of cross-lingual embedding models. CoRR, abs/1706.04902. Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for improved word similarity prediction. In Proceedings of CoNLL, pages 258–267. Samuel L. Smith, David H.P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of ICLR (Conference Track). 45 Ivan Vuli´c and Nikola Mrkˇsi´c. 2017. Specialising word vectors for lexical entailment. CoRR, abs/1710.06371. Ivan Vuli´c, Nikola Mrkˇsi´c, and Anna Korhonen. 2017a. Cross-lingual induction and transfer of verb classes based on word vector space specialisation. In Proceedings of EMNLP, pages 2536–2548. Ivan Vuli´c, Nikola Mrkˇsi´c, Roi Reichart, Diarmuid ´O S´eaghdha, Steve Young, and Anna Korhonen. 2017b. Morph-fitting: Fine-tuning word vector spaces with simple language-specific rules. In Proceedings of ACL, pages 56–68. Tsung-Hsien Wen, David Vandyke, Nikola Mrkˇsi´c, Milica Gaˇsi´c, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In Proceedings of EACL. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. Transactions of the ACL, 3:345–358. Jason D. Williams, Antoine Raux, and Matthew Henderson. 2016. The Dialog State Tracking Challenge series: A review. Dialogue & Discourse, 7(3):4–33. Chang Xu, Yalong Bai, Jiang Bian, Bin Gao, Gang Wang, Xiaoguang Liu, and Tie-Yan Liu. 2014. RCNET: A general framework for incorporating knowledge into word representations. In Proceedings of CIKM, pages 1219–1228. Wen-tau Yih, Geoffrey Zweig, and John C. Platt. 2012. Polarity inducing latent semantic analysis. In EMNLP-CoNLL, pages 1212–1222. Steve Young. 2010. Cognitive User Interfaces. IEEE Signal Processing Magazine. Mo Yu and Mark Dredze. 2014. Improving lexical embeddings with semantic knowledge. In Proceedings of ACL, pages 545–550. Jingwei Zhang, Jeremy Salwen, Michael Glass, and Alfio Gliozzo. 2014. Word semantic representations using bayesian probabilistic tensor factorization. In Proceedings of EMNLP, pages 1522–1531.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 429–439 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 429 Discourse Representation Structure Parsing Jiangming Liu Shay B. Cohen Mirella Lapata Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB [email protected], [email protected], [email protected] Abstract We introduce an open-domain neural semantic parser which generates formal meaning representations in the style of Discourse Representation Theory (DRT; Kamp and Reyle 1993). We propose a method which transforms Discourse Representation Structures (DRSs) to trees and develop a structure-aware model which decomposes the decoding process into three stages: basic DRS structure prediction, condition prediction (i.e., predicates and relations), and referent prediction (i.e., variables). Experimental results on the Groningen Meaning Bank (GMB) show that our model outperforms competitive baselines by a wide margin. 1 Introduction Semantic parsing is the task of mapping natural language to machine interpretable meaning representations. A variety of meaning representations have been adopted over the years ranging from functional query language (FunQL; Kate et al. 2005) to dependency-based compositional semantics (λ-DCS; Liang et al. 2011), lambda calculus (Zettlemoyer and Collins, 2005), abstract meaning representations (Banarescu et al., 2013), and minimal recursion semantics (Copestake et al., 2005). Existing semantic parsers are for the most part data-driven using annotated examples consisting of utterances and their meaning representations (Zelle and Mooney, 1996; Wong and Mooney, 2006; Zettlemoyer and Collins, 2005). The successful application of encoder-decoder models (Sutskever et al., 2014; Bahdanau et al., 2015) to a variety of NLP tasks has provided strong impetus to treat semantic parsing as a sequence transduction problem where an utterance is mapped to a target meaning representation in string format (Dong and Lapata, 2016; Jia and Liang, 2016; Koˇcisk`y et al., 2016). The fact that meaning representations do not naturally conform to a linear ordering has also prompted efforts to develop recurrent neural network architectures tailored to tree or graph-structured decoding (Dong and Lapata, 2016; Cheng et al., 2017; Yin and Neubig, 2017; Alvarez-Melis and Jaakkola, 2017; Rabinovich et al., 2017; Buys and Blunsom, 2017) Most previous work focuses on building semantic parsers for question answering tasks, such as querying a database to retrieve an answer (Zelle and Mooney, 1996; Cheng et al., 2017), or conversing with a flight booking system (Dahl et al., 1994). As a result, parsers trained on query-based datasets work on restricted domains (e.g., restaurants, meetings; Wang et al. 2015), with limited vocabularies, exhibiting limited compositionality, and a small range of syntactic and semantic constructions. In this work, we focus on open-domain semantic parsing and develop a general-purpose system which generates formal meaning representations in the style of Discourse Representation Theory (DRT; Kamp and Reyle 1993). DRT is a popular theory of meaning representation designed to account for a variety of linguistic phenomena, including the interpretation of pronouns and temporal expressions within and across sentences. 
Advantageously, it supports meaning representations for entire texts rather than isolated sentences which in turn can be translated into firstorder logic. The Groningen Meaning Bank (GMB; Bos et al. 2017) provides a large collection of English texts annotated with Discourse Representation Structures (see Figure 1 for an example). GMB integrates various levels of semantic annotation (e.g., anaphora, named entities, thematic roles, rhetorical relations) into a unified formalism providing expressive meaning representations for open-domain texts. We treat DRT parsing as a structure prediction problem. We develop a method to transform DRSs to tree-based representations which can be further linearized to bracketed string format. We examine a series of encoder-decoder models (Bahdanau et al., 2015) differing in the way tree430 x1,e1,π1 statement(x1), say(e1), Cause(e1, x1), Topic(e1,π1) π1: k1: x2 thing(x) ⇒ x3, s1, x3, x5, e2 Topic(s1, x3), dead(s1), man(x3), of(x2, x3), magazine(x4), on(x5,x4) vest(x5), wear(e2), Agent(e2, x2), Theme(e2, x5) k2: x6 thing(x6) ⇒ x7, s2, x8, x9, e3 Topic(s2, x7), dead(s2), man(x7), of(x6, x7), |x8| = 2, hand(x9), in(x8, x9), grenade(x8) carry(e3), Agent(e3, x6), Theme(e3, x8) continuation(k1,k2), parallel(k1,k2) Figure 1: DRT meaning representation for the sentence The statement says each of the dead men wore magazine vests and carried two hand grenades. structured logical forms are generated and show that a structure-aware decoder is paramount to open-domain semantic parsing. Our proposed model decomposes the decoding process into three stages. The first stage predicts the structure of the meaning representation omitting details such as predicates or variable names. The second stage fills in missing predicates and relations (e.g., thing, Agent) conditioning on the natural language input and the previously predicted structure. Finally, the third stage predicts variable names based on the input and the information generated so far. Decomposing decoding into these three steps reduces the complexity of generating logical forms since the model does not have to predict deeply nested structures, their variables, and predicates all at once. Moreover, the model is able to take advantage of the GMB annotations more efficiently, e.g., examples with similar structures can be effectively used in the first stage despite being very different in their lexical make-up. Finally, a piecemeal mode of generation yields more accurate predictions; since the output of every decoding step serves as input to the next one, the model is able to refine its predictions taking progressively more global context into account. Experimental results on the GMB show that our three-stage decoder outperforms a vanilla encoder-decoder model and a related variant which takes shallow structure into account, by a wide margin. Our contributions in this work are three-fold: an open-domain semantic parser which yields discourse representation structures; a novel end-toend neural model equipped with a structured decoder which decomposes the parsing process into three stages; a DRS-to-tree conversion method which transforms DRSs to tree-based representations allowing for the application of structured decoders as well as sequential modeling. We release our code1 and tree formatted version of the GMB in the hope of driving further research in opendomain semantic parsing. 2 Discourse Representation Theory In this section we provide a brief overview of the representational semantic formalism used in the GMB. 
We refer the reader to Bos et al. (2017) and Kamp and Reyle (1993) for more details. Discourse Representation Theory (DRT; Kamp and Reyle 1993) is a general framework for representing the meaning of sentences and discourse which can handle multiple linguistic phenomena including anaphora, presuppositions, and temporal expressions. The basic meaning-carrying units in DRT are Discourse Representation Structures (DRSs), which are recursive formal meaning structures that have a model-theoretic interpretation and can be translated into first-order logic (Kamp and Reyle, 1993). Basic DRSs consist of discourse referents (e.g., x,y) representing entities in the discourse and discourse conditions (e.g., man(x), magazine(y)) representing information about discourse referents. Following conventions in the DRT literature, we visualize DRSs in a box-like format (see Figure 1). GMB adopts a variant of DRT that uses a neoDavidsonian analysis of events (Kipper et al., 2008), i.e., events are first-order entities characterized by one-place predicate symbols (e.g., say(e1) in Figure 1). In addition, it follows Projective Discourse Representation Theory (PDRT; Venhuizen et al. 2013) an extension of DRT specifically developed to account for the interpretation of presuppositions and related projection phenomena 1https://github.com/EdinburghNLP/EncDecDRSparsing 431 (e.g., conventional implicatures). In PDRT, each basic DRS introduces a label, which can be bound by a pointer indicating the interpretation site of semantic content. To account for the rhetorical structure of texts, GMB adopts Segmented Discourse Representation Theory (SDRT; Asher and Lascarides 2003). In SDRT, discourse segments are linked with rhetorical relations reflecting different characteristics of textual coherence, such as temporal order and communicative intentions (see continuation(k1, k2) in Figure 1). More formally, DRSs are expressions of type ⟨expe⟩(denoting individuals or discourse referents) and ⟨expt⟩(i.e., truth values): ⟨expe⟩::= ⟨ref⟩, ⟨expt⟩::= ⟨drs⟩|⟨sdrs⟩, (1) discourse referents ⟨re f⟩are in turn classified into six categories, namely common referents (xn), event referents (en), state referents (sn), segment referents (kn), proposition referents (πn), and time referents (tn). ⟨drs⟩and ⟨sdrs⟩denote basic and segmented DRSs, respectively: ⟨drs⟩::= ⟨pvar⟩: (⟨pvar⟩,⟨ref⟩)∗ (⟨pvar⟩,⟨condition⟩)∗, (2) ⟨sdrs⟩::= k1 : ⟨expt⟩,k2 : ⟨expt⟩ coo(k1,k2) | k1:⟨expt⟩ k2:⟨expt⟩ sub(k1,k2) , (3) Basic DRSs consist of a set of referents (⟨ref⟩) and conditions (⟨condition⟩), whereas segmented DRSs are recursive structures that combine two ⟨expt⟩by means of coordinating (coo) or subordinating (sub) relations. DRS conditions can be basic or complex: ⟨condition⟩::= ⟨basic⟩|⟨complex⟩, (4) Basic conditions express properties of discourse referents or relations between them: ⟨basic⟩::= ⟨sym1⟩(⟨expe⟩) | ⟨sym2⟩(⟨expe⟩,⟨expe⟩) | ⟨expe⟩= ⟨expe⟩| ⟨expe⟩= ⟨num⟩ | timex(⟨expe⟩,⟨sym0⟩) | named(⟨expe⟩,⟨sym0⟩,class). (5) where ⟨symn⟩denotes n-place predicates, ⟨num⟩ denotes cardinal numbers, timex expresses temporal information (e.g., timex(x7,2005) denotes the year 2005), and class refers to named entity classes (e.g., location). Complex conditions are unary or binary. Unary conditions have one DRS as argument and represent negation (¬) and modal operators expressing necessity (2) and possibility (3). 
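These recursive definitions translate naturally into a small set of data types. The following is an illustrative sketch only (not the GMB/Boxer representation itself, and all type names are ours), covering typed referents, basic conditions, scoped complex conditions, and basic versus segmented DRSs.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Union

# Discourse referents are typed variables: x (entity), e (event), s (state),
# k (segment), pi (proposition) and t (time), e.g. "x1", "e1", "pi1".
Referent = str

@dataclass
class BasicCondition:
    """A predicate over referents, e.g. man(x3) or Agent(e2, x2)."""
    symbol: str
    args: List[Referent]

@dataclass
class ComplexCondition:
    """A scoped condition: an operator (negation, a modal, an implication, or a
    proposition referent) applied to one or more embedded DRS expressions."""
    operator: str
    scopes: List["DRSExpr"]

Condition = Union[BasicCondition, ComplexCondition]

@dataclass
class DRS:
    """A basic DRS: discourse referents plus conditions over them."""
    referents: List[Referent] = field(default_factory=list)
    conditions: List[Condition] = field(default_factory=list)

@dataclass
class SDRS:
    """A segmented DRS: labelled sub-expressions linked by discourse relations."""
    segments: Dict[Referent, "DRSExpr"]      # e.g. {"k1": DRS(...), "k2": DRS(...)}
    relations: List[BasicCondition]          # e.g. continuation(k1, k2)

DRSExpr = Union[DRS, SDRS]

# A fragment of Figure 1: man(x3), wear(e2), Agent(e2, x2), Theme(e2, x5)
fragment = DRS(referents=["x3", "e2"],
               conditions=[BasicCondition("man", ["x3"]),
                           BasicCondition("wear", ["e2"]),
                           BasicCondition("Agent", ["e2", "x2"]),
                           BasicCondition("Theme", ["e2", "x5"])])
```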
Condition sections # doc # sent # token avg 00-99 10,000 62,010 1,354,149 21.84 20-99 7,970 49,411 1,078,953 21.83 10-19 1,038 6,483 142,344 21.95 00-09 992 6,116 132,852 21.72 Table 1: Statistics on the GMB (avg denotes the average number of tokens per sentence). ⟨ref⟩: ⟨expt⟩represents verbs with propositional content (e.g., factive verbs). Binary conditions are conditional statements (→) and questions. ⟨complex⟩::= ⟨unary⟩| ⟨binary⟩, (6) ⟨unary⟩::= ¬⟨expt⟩| 2⟨expt⟩|3⟨expt⟩|⟨re f⟩: ⟨expt⟩ ⟨binary⟩::=⟨expt⟩→⟨expt⟩|⟨expt⟩∨⟨expt⟩|⟨expt⟩?⟨expt⟩ 3 The Groningen Meaning Bank Corpus Corpus Creation DRSs in GMB were obtained from Boxer (Bos, 2008, 2015), and then refined using expert linguists and crowdsourcing methods. Boxer constructs DRSs based on a pipeline of tools involving POS-tagging, named entity recognition, and parsing. Specifically, it relies on the syntactic analysis of the C&C parser (Clark and Curran, 2007), a general-purpose parser using the framework of Combinatory Categorial Grammar (CCG; Steedman 2001). DRSs are obtained from CCG parses, with semantic composition being guided by the CCG syntactic derivation. Documents in the GMB were collected from a variety of sources including Voice of America (a newspaper published by the US Federal Government), the Open American National Corpus, Aesop’s fables, humorous stories and jokes, and country descriptions from the CIA World Factbook. The dataset consists of 10,000 documents each annotated with a DRS. Various statistics on the GMB are shown in Table 1. Bos et al. (2017) recommend sections 20–99 for training, 10–19 for tuning, and 00–09 for testing. DRS-to-Tree Conversion As mentioned earlier, DRSs in the GMB are displayed in a box-like format which is intuitive and easy to read but not particularly amenable to structure modeling. In this section we discuss how DRSs were post-processed and simplified into a tree-based format, which served as input to our models. The GMB provides DRS annotations perdocument. Our initial efforts have focused on sentence-level DRS parsing which is undoubtedly 432 a necessary first step for more global semantic representations. It is relatively, straightforward to obtain sentence-level DRSs from document-level annotations since referents and conditions are indexed to tokens. We match each sentence in a document with the DRS whose content bears the same indices as the tokens occurring in the sentence. This matching process yields 52,268 sentences for training (sections 20–99), 5,172 sentences for development (sections 10–19), (development), and 5,440 sentences for testing (sections 00–09). In order to simplify the representation, we omit referents in the top part of the DRS (e.g., x1, e1 and π1 in Figure 1) but preserve them in conditions without any information loss. Also we ignore pointers to DRSs since this information is implicitly captured through the typing and co-indexing of referents. Definition (1) is simplified to: ⟨drs⟩::= DRS(⟨condition⟩∗), (7) where DRS() denotes a basic DRS. We also modify discourse referents to SDRSs (e.g., k1, k2 in Figure 1) which we regard as elements bearing scope over expressions ⟨expt⟩and add a 2-place predicate ⟨sym2⟩to describe the discourse relation between them. So, definition (3) becomes: ⟨sdrs⟩::=SDRS((⟨ref⟩(⟨expt⟩))∗ (8) (⟨sym2⟩(⟨ref⟩,⟨ref⟩))∗), where SDRS() denotes a segmented DRS, and ⟨re f⟩are segment referents. We treat cardinal numbers ⟨num⟩and ⟨sym0⟩ in relation timex as constants. 
We introduce the binary predicate “card” to represent cardinality (e.g., |x8| = 2 is card(x8,NUM)). We also simplify ⟨expe⟩= ⟨expe⟩to eq(⟨expe⟩,⟨expe⟩) using the binary relation “eq” (e.g., x1 = x2 becomes eq(x1,x2)). Moreover, we ignore class in named and transform named(⟨expe⟩,⟨sym0⟩,class) into ⟨sym1⟩(⟨expe⟩) (e.g., named(x2,mongolia,geo) becomes mongolia(x2)). Consequently, basic conditions (see definition (5)) are simplified to: ⟨basic⟩::= ⟨sym1⟩(⟨expe⟩)|⟨sym2⟩(⟨expe⟩,⟨expe⟩) (9) Analogously, we treat unary and binary conditions as scoped functions, and definition (6) becomes: ⟨unary⟩::= ¬ | 2 | 3 | ⟨ref⟩(⟨expt⟩) ⟨binary⟩::= →| ∨| ?(⟨expt⟩,⟨expt⟩), (10) Following the transformations described above, the DRS in Figure 1 is converted into the tree in DRS statement(x1) say(e1) Cause(e1,x1) Topic(e1,π1) π1 SDRS k1 DRS =⇒ DRS thing(x2) DRS Topic(s1,x3) ... Theme(e2,x5) k2 DRS =⇒ DRS thing(x6) DRS Topic(s2,x7) ... Theme(e3,x8) continuation(k1,k2) parallel(k1,k2) DRS(statement(x1) say(e1) Cause(e1,x1) Topic(e1,π1) π1(SDRS(k1 (DRS (=⇒(DRS(thing(x2)) DRS (Topic(s1,x3) dead(s1) man(x3) of(x2,x3) magazine(x4) on(x5,x4) vest(x5) wear(e2) Agent(e2,x2) Theme(e2,x5))))) k2(DRS =⇒(DRS(thing(x6)) DRS(Topic(s2,x7) dead(s2) man(x7) of(x6,x7) card(x8,NUM) hand(x9) in(x8,x9) carry(e3) Agent(e3,x6) Theme(e3,x8))))) continuation(k1,k2) parallel(k1,k2) Figure 2: Tree-based representation (top) of the DRS in Figure 1 and its linearization (bottom). Figure 2, which can be subsequently linearized into a PTB-style bracketed sequence. It is important to note that the conversion does not diminish the complexity of DRSs. The average tree width in the training set is 10.39 and tree depth is 4.64. 4 Semantic Parsing Models We present below three encoder-decoder models which are increasingly aware of the structure of the DRT meaning representations. The models take as input a natural language sentence X represented as w1,w2,... ,wn, and generate a sequence Y = (y1,y2,...,ym), which is a linearized tree (see Figure 2 bottom), where n is the length of the sentence, and m the length of the generated DRS sequence. We aim to estimate p(Y|X), the conditional probability of the semantic parse tree Y given natural language input X: p(Y|X) = ∏ j p(yj|Y j−1 1 ,Xn 1 ) 4.1 Encoder An encoder is used to represent the natural language input X into vector representations. Each token in a sentence is represented by a vector xk which is the concatenation of randomly initialized embeddings ewi, pre-trained word embeddings ¯ewi, and lemma embeddings eli: xk = tanh([ewi; ¯ewi;eli] ∗W1 + b1), where W1 ∈RD and D is a shorthand for (dw + dp + dl) × dinput (subscripts w, p, and l denote the dimensions of word embeddings, pre-trained embeddings, and lemma embeddings, respectively); b1 ∈Rdinput and the symbol ; denotes concatenation. Embeddings ewi 433 and eli are randomly initialized and tuned during training, while ¯ewi are fixed. We use a bidirectional recurrent neural network with long short-term memory units (bi-LSTM; Hochreiter and Schmidhuber 1997) to encode natural language sentences: [he1 : hen] = bi-LSTM(x1 : xn), where hei denotes the hidden representation of the encoder, and xi refers to the input representation of the ith token in the sentence. Table 2 summarizes the notation used throughout this paper. 4.2 Sequence Decoder We employ a sequential decoder (Bahdanau et al., 2015) as our baseline model with the architecture shown in Figure 3(a). 
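Before the decoder details are given, the encoder of Section 4.1 can be sketched in PyTorch as follows. Hyper-parameter values and module names are illustrative assumptions; in particular, word indices are reused for the pre-trained embedding table here, whereas the real implementation may keep separate vocabularies.

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Bi-LSTM encoder over concatenated random word, pre-trained word, and
    lemma embeddings (a sketch of Section 4.1; dimensions are illustrative)."""

    def __init__(self, vocab, lemmas, pretrained,
                 d_w=64, d_l=32, d_input=128, d_enc=256):
        super().__init__()
        d_p = pretrained.size(1)
        self.word = nn.Embedding(vocab, d_w)                 # tuned
        self.lemma = nn.Embedding(lemmas, d_l)               # tuned
        self.pre = nn.Embedding.from_pretrained(pretrained, freeze=True)
        self.proj = nn.Linear(d_w + d_p + d_l, d_input)      # W1, b1
        self.bilstm = nn.LSTM(d_input, d_enc // 2, num_layers=2,
                              bidirectional=True, batch_first=True)

    def forward(self, words, lemmas):
        x = torch.cat([self.word(words), self.pre(words),
                       self.lemma(lemmas)], dim=-1)
        x = torch.tanh(self.proj(x))                         # x_k
        h, (h_n, c_n) = self.bilstm(x)                       # [h_e1 : h_en]
        return h, (h_n, c_n)

# toy usage with a random pre-trained table of dimension 100
enc = SentenceEncoder(vocab=100, lemmas=80, pretrained=torch.randn(100, 100))
h, _ = enc(torch.randint(0, 100, (1, 6)), torch.randint(0, 80, (1, 6)))
print(h.shape)                                               # torch.Size([1, 6, 256])
```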
Our decoder is a (forward) LSTM, which is conditionally initialized with the hidden state of the encoder, i.e., we set hd0 = hen and cd0 = cen, where c is a memory cell: hd j = LSTM(eyj−1), where hd j denotes the hidden representation of yj, eyj are randomly initialized embeddings tuned during training, and y0 denotes the start of sequence. The decoder uses the contextual representation of the encoder together with the embedding of the previously predicted token to output the next token from the vocabulary V: sj = [hct j;eyj−1]∗W2 +b2, where W2 ∈R(denc+dy)×|V|, b2 ∈R|V|, denc and dy are the dimensions of the encoder hidden unit and output representation, respectively, and hct j is obtained using an attention mechanism: hct j = n ∑ i=1 βjihei, where the weight β ji is computed by: βji = ef(hdj,hei) ∑k ef(hdj,hek) , and f is the dot-product function. We obtain the probability distribution over the output tokens as: pj = p(yj|Y j−1 1 ,Xn 1 ) = SOFTMAX(sj) Symbol Description X; Y sequence of words; outputs wi; yi the ith word; output X j i ; Y j i word; output sequence from position i to j ewi; eyi random embedding of word wi; of output yi ¯ewi fixed pretrained embedding of word wi eli random embedding for lemma li dw dimension of random word embedding dp dimension of pretrained word embedding dl the dimension of random lemma embedding dinput input dimension of encoder denc; ddec hidden dimension of encoder; decoder Wi matrix of model parameters bi vector of model parameters xi representation of ith token hei hidden representation of ith token cei memory cell of ith token in encoder hdi hidden representation of ith token in decoder cdi memory cell of ith token in decoder s j score vector of jth output in decoder hct j context representation of jth output βi j alignment from jth output to ith token oi j copy score of jth output from ith token ˆ indicates tree structure (e.g. ˆY, ˆyi, ˆs j) ¯ indicates DRS conditions (e.g. ¯Y, ¯yi, ¯s j) ˙ indicates referents (e.g. ˙Y, ˙yi, ˙s j) Table 2: Notation used throughout this paper. 4.3 Shallow Structure Decoder The baseline decoder treats all conditions in a DRS uniformly and has no means of distinguishing between conditions corresponding to tokens in a sentence (e.g., the predicate say(e1) refers to the verb said) and semantic relations (e.g., Cause(e1,x1)). Our second decoder attempts to take this into account by distinguishing conditions which are local and correspond to words in a sentence from items which are more global and express semantic content (see Figure 3(b)). Specifically, we model sentence specific conditions using a copying mechanism, and all other conditions G which do not correspond to sentential tokens (e.g., thematic roles, rhetorical relations) with an insertion mechanism. Each token in a sentence is assigned a copying score oji: oji = h⊤ djW3hei, where subscript ji denotes the ith token at jth time step, and W3 ∈Rddec×denc. All other conditions G are assigned an insertion score: sj = [hct j;eyj−1]∗W4 +b4, where W4 ∈R(denc+dy)×|G|, b4 ∈R|G|, and hct j are the same with the baseline decoder. We obtain the probability distribution over output tokens as: pj = p(yj|Y j−1 1 ,Xn 1 ) = SOFTMAX([o j;sj]) 434 The state <SOS> DRS( state( (a) … . x1 DRS( state( x1 ) ) say( scoring component The state <SOS> … . state( say( state( DRS( x1 x1 ) e1 (c) say( e1 π1 SDRS( … e1 ) SDRS( k1 The state (b) … . 
Cause( say( ) e1 Cause( e1 e1 x1 x1 ) Topic( Topic( e1 DRS( π1( π1( SDRS( k1 Cont.( ) Cause( e1 x1 ) Cause( e1 x1 ) Topic( … <SOS> DRS( state( x1 DRS( state( x1 ) ) say( say( e1 p1 SDRS( … e1 ) SDRS( k1 ) Cause( e1 x1 ) Cause( e1 x1 ) Topic( … concat concat concat concat concat concat concat concat concat concat concat concat concat concat π1 ) e1 π1 ) k2 k1 k2 ) Cont.( Paral.( k1 k2 ) k1 k2 k1( SDRS( … … … (c.1) (c.2) (c.3) Figure 3: (a) baseline model; (b) shallow structure model; (c) deep structure model (scoring components are not displayed): (c.1) predicts DRS structure, (c.2) predicts conditions, and (c.3) predicts referents. Blue boxes are encoder hidden units, red boxes are decoder LSTM hidden units, green and yellow boxes represent copy and insertion scores, respectively. 4.4 Deep Structure Decoder As explained previously, our structure prediction problem is rather challenging: the length of a bracketed DRS is nearly five times longer than its corresponding sentence. As shown in Figure 1, a bracketed DRS, y1,y2,...,yn consists of three parts: internal structure ˆY = ˆy1, ˆy2,...ˆyt (e.g., DRS( π1( SDRS(k1(DRS(→(DRS( )DRS( ))) k2( DRS(→( DRS( ) DRS ( ) ) ) ) ) ) )), conditions ¯Y = ¯y1, ¯y2,..., ¯yr (e.g., statement, say, Topic), and referents ˙Y = ˙y1, ˙y2,..., ˙yv (e.g., x1, e1, π1), where t +r ∗2+v = n.2 Our third decoder (see Figure 3(c)) first predicts the structural make-up of the DRS, then the conditions, and finally their referents in an end-to-end framework. The probability distribution of structured output Y given natural language input X is rewritten as: p(Y|X) = p(ˆY, ¯Y, ˙Y|X) = ∏j p(ˆyj|ˆY j−1 1 ,X) ×∏j p(¯yj|¯Y j−1 1 , ˆY j′ 1 ,X) ×∏j p(˙yj|˙Y j−1 1 , ¯Y j′ 1 , ˆY j′′ 1 ,X) (11) where ˆY j−1 1 , ¯Y j−1 1 , and ˙Y j−1 1 denote the tree structure, conditions, and referents predicted so far. 2Each condition has one and only one right bracket. ˆY j′ 1 denotes the structure predicted before conditions ¯yj; ˆY j′′ 1 and ¯Y j′ 1 are the structures and conditions predicted before referents ˙yj. We next discuss how each decoder is modeled. Structure Prediction To model basic DRS structure we apply the shallow decoder discussed in Section 4.3 and also shown in Figure 3(c.1). Tokens in such structures correspond to parent nodes in a tree; in other words, they are all inserted from G, and subsequently predicted tokens are only scored with the insert score, i.e., ˆsi = si. The hidden units of the decoder are: ˆhdj = LSTM(eˆyj−1), And the probabilistic distribution over structure denoting tokens is: p(yj|Y j−1 1 ,X) = SOFTMAX(ˆsj) Condition Prediction DRS conditions are generated by taking previously predicted structures into account, e.g., when “DRS(” or “SDRS(” are predicted, their conditions will be generated next. By mapping j to (k,mk), the sequence of conditions can be rewritten as ¯y1,..., ¯yj,..., ¯yr = ¯y(1,1), ¯y(1,2),..., ¯y(k,mk),..., where ¯y(k,mk) is mkth 435 condition of structure token ˆyk. The corresponding hidden units ˆhdk act as conditional input to the decoder. Structure denoting tokens (e.g., “DRS(” or “SDRS(”) are fed into the decoder one by one to generate the corresponding conditions as: e¯y(k,0) = ˆhdk ∗W5 +b5, where W5 ∈Rddec×dy and b5 ∈Rdy. The hidden unit of the conditions decoder is computed as: ¯hd j = ¯hd(k,mk) = LSTM(e¯y(k,mk−1)), Given hidden unit ¯hd j, we obtain the copy score ¯oj and insert score ¯sj. 
The probabilistic distribution over conditions is: p(¯yj|¯Y j−1 1 , ˆY j′ 1 ,X) = SOFTMAX([ ¯oj; ¯sj]) Referent Prediction Referents are generated based on the structure and conditions of the DRS. Each condition has at least one referent. Similar to condition prediction, the sequence of referents can be rewritten as ˙y1,..., ˙yj,..., ˙yv = ˙y(1,1), ˙y(1,2),..., ˙y(k,mk),... The hidden units of the conditions decoder are fed into the referent decoder e˙y(k,0) = ¯hdk ∗W6 + b6, where W6 ∈Rddec×dy, b6 ∈Rdy. The hidden unit of the referent decoder is computed as: ˙hd j = ˙hd(k,mk) = LSTM(e˙y(k,mk−1)), All referents are inserted from G, given hidden unit ˙hd j (we only obtain the insert score ˙sj). The probabilistic distribution over predicates is: p(˙yj|˙Y j−1 1 , ¯Y j′ 1 , ˆY j′′ 1 ,X) = SOFTMAX(˙sj). Note that a single LSTM is adopted for structure, condition and referent prediction. The mathematic symbols are summarized in Table 2. 4.5 Training The models are trained to minimize a crossentropy loss objective with ℓ2 regularization: L(θ) = −∑ j log pj + λ 2||θ||2, where θ is the set of parameters, and λ is a regularization hyper-parameter (λ = 10−6). We used stochastic gradient descent with Adam (Kingma and Ba, 2014) to adjust the learning rate. 5 Experimental Setup Settings Our experiments were carried out on the GMB following the tree conversion process discussed in Section 3. We adopted the training, development, and testing partitions recommended in Bos et al. (2017). We compared the three models introduced in Section 4, namely the baseline sequence decoder, the shallow structured decoder and the deep structure decoder. We used the same empirical hyper-parameters for all three models. The dimensions of word and lemma embeddings were 64 and 32, respectively. The dimensions of hidden vectors were 256 for the encoder and 128 for the decoder. The encoder used two hidden layers, whereas the decoder only one. The dropout rate was 0.1. Pre-trained word embeddings (100 dimensions) were generated with Word2Vec trained on the AFP portion of the English Gigaword corpus.3 Evaluation Due to the complex nature of our structured prediction task, we cannot expect model output to exactly match the gold standard. For instance, the numbering of the referents may be different, but nevertheless valid, or the order of the children of a tree node (e.g., “DRS(india(x1) say(e1))” and “DRS(say(e1) india(x1))” are the same). We thus use F1 instead of exact match accuracy. Specifically, we report D-match4 a metric designed to evaluate scoped meaning representations and released as part of the distribution of the Parallel Meaning Bank corpus (Abzianidze et al., 2017). D-match is based on Smatch5, a metric used to evaluate AMR graphs (Cai and Knight, 2013); it calculates F1 on discourse representation graphs (DRGs), i.e., triples of nodes, arcs, and their referents, applying multiple restarts to obtain a good referent (node) mapping between graphs. We converted DRSs (predicted and goldstandard) into DRGs following the top-down procedure described in Algorithm 1.6 ISCONDITION returns true if the child is a condition (e.g., india(x1)), where three arcs are created, one is connected to a parent node and the other two are connected to arg1 and arg2, respectively (lines 7–12). ISQUANTIFIER returns true if the child is a quantifier (e.g., π1, ¬ and 2) and three arcs are created; one is connected to the parent node, one to the referent that is created if and only 3The models are trained on a single GPU without batches. 
4https://github.com/RikVN/D-match 5https://github.com/snowblink14/smatch 6We refer the interested reader to the supplementary material for more details. 436 Algorithm 1 DRS to DRG Conversion Input: T, tree-like DRS Output: G, a set of edges 1: nb ←0; nc ←0; G ←Ø 2: stack ←[];R ←Ø 3: procedure TRAVELDRS(parent) 4: stack.append(bnb);nb ←nb +1 5: nodep ←stack.top 6: for child in parent do 7: if ISCONDITION(child) then 8: G ←G∪{nodep child.rel −−−−−→cnc} 9: G ←G∪{cnc arg1 −−→child.arg1} 10: G ←G∪{cnc arg2 −−→child.arg2} 11: nc ←nc +1 12: ADDREFERENT(nodep,child) 13: else if ISQUANTIFIER(child) then 14: G ←G∪{nodep child.class −−−−−−→cnc} 15: G ←G∪{cnc arg1 −−→child.arg1} 16: G ←G∪{cnc arg1 −−→bnb+1} 17: nc ←nc +1 18: if ISPROPSEG(child) then 19: ADDREFERENT(nodep,child) 20: end if 21: TRAVELDRS(child.nextDRS) 22: end if 23: end for 24: stack.pop() 25: end procedure 26: procedure ADDREFERENT(nodep,child) 27: if child.arg1 not in R then 28: G ←G∪{nodep ref −→child.arg1} 29: R ←R∪child.arg1 30: end if 31: if child.arg2 not in R then 32: G ←G∪{nodep ref −→child.arg2} 33: R ←R∪child.arg2 34: end if 35: end procedure 36: TRAVELDRS(T) 37: return G if the child is a proposition or segment (e.g., π1 and k1), and one is connected to the next DRS or SDRS nodes (lines 13–20). The algorithm will recursively travel all DRS or SDRS nodes (line 21). Furthermore, arcs are introduced to connect DRS or SDRS nodes to the referents that first appear in a condition (lines 26–35). When comparing two DRGs, we calculate the F1 over their arcs. For example consider the two DRGs (a) and (b) shown in Figure 4. Let {b0 : b0,x1 : x2,x2 : x3,c0 : c0,c1 : c2,c2 : c3} denote the node alignment between them. The number of matching arcs is eight, the number of arcs in the gold DRG is nine, and the number of arcs in the predicted DRG is 12. So recall is 8/9, precision is 8/12, and F1 is 76.19. b0 x1 x2 c0 c1 c2 b0 x2 x3 c0 c2 c3 x1 c1 (a) (b) Figure 4: (a) is the gold DRS and (b) is the predicted DRS (condition names are not shown). 6 Results Table 3 compares our three models on the development set. As can be seen, the shallow structured decoder performs better than the baseline decoder, and the proposed deep structure decoder outperforms both of them. Ablation experiments show that without pre-trained word embeddings or word lemma embeddings, the model generally performs worse. Compared to lemma embeddings, pretrained word embeddings contribute more. Table 4 shows our results on the test set. To assess the degree to which the various decoders contribute to DRS parsing, we report results when predicting the full DRS structure (second block), when ignoring referents (third block), and when ignoring both referents and conditions (fourth block). Overall, we observe that the shallow structure model improves precision over the baseline with a slight loss in recall, while the deep structure model performs best by a large margin. When referents are not taken into account (compare the second and third blocks in Table 4), performance improves across the board. When conditions are additionally omitted, we observe further performance gains. This is hardly surprising, since errors propagate from one stage to the next when predicting full DRS structures. Further analysis revealed that the parser performs slightly better on (copy) conditions which correspond to natural language tokens compared to (insert) conditions (e.g., Topic, Agent) which are generated from global semantic content (83.22 vs 80.63 F1). 
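Returning briefly to the evaluation metric: the worked example around Figure 4 (recall 8/9, precision 8/12, F1 76.19) is easy to reproduce. The sketch below computes arc-overlap F1 under a fixed node alignment; the released D-match tool additionally searches over alignments with multiple restarts, which is not shown here.

```python
def prf(matched, n_gold, n_pred):
    """Precision, recall and F1 from arc counts."""
    p = matched / n_pred
    r = matched / n_gold
    return p, r, 2 * p * r / (p + r)

# Figure 4 example: 8 matching arcs, 9 gold arcs, 12 predicted arcs.
p, r, f1 = prf(8, 9, 12)
print(f"P={p:.4f} R={r:.4f} F1={100 * f1:.2f}")   # P=0.6667 R=0.8889 F1=76.19

def match_arcs(gold_arcs, pred_arcs, node_map):
    """Count arc triples (source, label, target) shared by two DRGs under a
    given gold-to-predicted node alignment (a simplified sketch of D-match)."""
    mapped = {(node_map.get(s, s), lab, node_map.get(t, t))
              for s, lab, t in gold_arcs}
    return len(mapped & set(pred_arcs))
```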
The parser is also better on sentences which do not represent SDRSs (79.12 vs 68.36 F1) which is expected given that they usually correspond to more elaborate structures. We also found that rhetorical relations (linking segments) are predicted fairly accurately, especially if they are frequently attested (e.g., Continuation, Parallel), while the parser has difficulty with relations denoting contrast. 437 Model P (%) R (%) F1 (%) baseline 51.35 63.85 56.92 shallow 67.88 63.53 65.63 deep 79.01 75.65 77.29 deep (–pre) 78.47 73.43 75.87 deep (–pre & lem) 78.21 72.82 75.42 Table 3: GMB development set. Model DRG DRG w/o refs DRG w/o refs & conds P R F1 P R F1 P R F1 baseline 52.21 64.46 57.69 47.20 58.93 52.42 52.89 71.80 60.91 shallow 66.61 63.92 65.24 66.05 62.93 64.45 83.30 62.91 71.68 deep 79.27 75.88 77.54 82.87 79.40 81.10 93.91 88.51 91.13 Table 4: GMB test set. 10 15 20 25 30 60 80 100 sentence length F1 (%) deep shallow baseline Figure 5: F1 score as a function of sentence length. Figure 5 shows F1 performance for the three parsers on sentences of different length. We observe a similar trend for all models: as sentence length increases, model performance decreases. The baseline and shallow models do not perform well on short sentences which despite containing fewer words, can still represent complex meaning which is challenging to capture sequentially. On the other hand, the performance of the deep model is relatively stable. LSTMs in this case function relatively well, as they are faced with the easier task of predicting meaning in different stages (starting with a tree skeleton which is progressively refined). We provide examples of model output in the supplementary material. 7 Related Work Tree-structured Decoding A few recent approaches develop structured decoders which make use of the syntax of meaning representations. Dong and Lapata (2016) and Alvarez-Melis and Jaakkola (2017) generate trees in a top-down fashion, while in other work (Xiao et al., 2016; Krishnamurthy et al., 2017) the decoder generates from a grammar that guarantees that predicted logical forms are well-typed. In a similar vein, Yin and Neubig (2017) generate abstract syntax trees (ASTs) based on the application of production rules defined by the grammar. Rabinovich et al. (2017) introduce a modular decoder whose various components are dynamically composed according to the generated tree structure. In comparison, our model does not use grammar information explicitly. We first decode the structure of the DRS, and then fill in details pertaining to its semantic content. Our model is not strictly speaking top-down, we generate partial trees sequentially, and then expand non-terminal nodes, ensuring that when we generate the children of a node, we have already obtained the structure of the entire tree. Wide-coverage Semantic Parsing Our model is trained on the GMB (Bos et al., 2017), a richly annotated resource in the style of DRT which provides a unique opportunity for bootstrapping wide-coverage semantic parsers. Boxer (Bos, 2008) was a precursor to the GMB, the first semantic parser of this kind, which deterministically maps CCG derivations onto formal meaning representations. Le and Zuidema (2012) were the first to train a semantic parser on an early release of the GMB (2,000 documents; Basile et al. 2012), however, they abandon lambda calculus in favor of a graph based representation. The latter is closely related to AMR, a general-purpose meaning representation language for broad-coverage text. 
In AMR the meaning of a sentence is represented as a rooted, directed, edge-labeled and leaf-labeled graph. AMRs do not resemble classical meaning representations and do not have a model-theoretic interpretation. However, see Bos (2016) and Artzi et al. (2015) for translations to first-order logic. 8 Conclusions We introduced a new end-to-end model for opendomain semantic parsing. Experimental results on the GMB show that our decoder is able to recover discourse representation structures to a good degree (77.54 F1), albeit with some simplifications. In the future, we plan to model document-level representations which are more in line with DRT and the GMB annotations. Acknowledgments We thank the anonymous reviewers for their feedback and Johan Bos for answering several questions relating to the GMB. We gratefully acknowledge the support of the European Research Council (Lapata, Liu; award number 681760) and the EU H2020 project SUMMA (Cohen, Liu; grant agreement 688139). 438 References Lasha Abzianidze, Johannes Bjerva, Kilian Evang, Hessel Haagsma, Rik van Noord, Pierre Ludmann, Duc-Duy Nguyen, and Johan Bos. 2017. The parallel meaning bank: Towards a multilingual corpus of translations annotated with compositional meaning representations. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 242–247, Valencia, Spain. David Alvarez-Melis and Tommi S. Jaakkola. 2017. Tree-structured decoding with doubly-recurrent neural networks. In Proceedings of the 5th International Conference on Learning Representation (ICLR), Toulon, France. Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1699–1710, Lisbon, Portugal. Nicholas Asher and Alex Lascarides. 2003. Logics of conversation. Cambridge University Press. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 4th International Conference on Learning Representations (ICLR), San Diego, California. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Valerio Basile, Johan Bos, Kilian Evang, and Noortje Venhuizen. 2012. Developing a large semantically annotated corpus. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC’12), Istanbul, Turkey. Johan Bos. 2008. Wide-coverage semantic analysis with Boxer. In Proceedings of the 2008 Conference on Semantics in Text Processing, pages 277–286. Johan Bos. 2015. Open-domain semantic parsing with Boxer. In Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015), pages 301–304. Link¨oping University Electronic Press, Sweden. Johan Bos. 2016. Expressive power of abstract meaning representations. Computational Linguistics, 42(3):527–535. Johan Bos, Valerio Basile, Kilian Evang, Noortje Venhuizen, and Johannes Bjerva. 2017. The groningen meaning bank. In Nancy Ide and James Pustejovsky, editors, Handbook of Linguistic Annotation, volume 2, pages 463–496. Springer. Jan Buys and Phil Blunsom. 2017. 
Robust incremental neural semantic graph parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1215–1226, Vancouver, Canada. Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 748–752, Sofia, Bulgaria. Jianpeng Cheng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. 2017. Learning structured natural language representations for semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 44–55, Vancouver, Canada. Stephen Clark and James Curran. 2007. Widecoverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493–552. Ann Copestake, Dan Flickinger, Carl Pollar, and Ivan A. Sag. 2005. Minimal recursion semantics: An introduction. Research on Language and Computation, 2–3(3):281–332. Deborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, Christine Pao David Pallett, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the atis task: the atis-3 corpus. In Proceedings of the workshop on ARPA Human Language Technology, pages 43–48, Plainsboro, New Jersey. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33–43, Berlin, Germany. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12–22, Berlin, Germany. Hans Kamp and Uwe Reyle. 1993. From discourse to logic; an introduction to modeltheoretic semantics of natural language, formal logic and DRT. Rohit J. Kate, Yuk Wah Wong, and Raymond J. Mooney. 2005. Learning to transform natural to formal languages. In Proceedings of the 20th National Conference on Artificial Intelligence, pages 1062– 1068, Pittsburgh, PA. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), Banff, Canada. 439 Karin Kipper, Anna Korhonen, Neville Ryant, and Martha Palmer. 2008. A large-scale classification of english verbs. Language Resources and Evaluation, 42(1):21–40. Tom´aˇs Koˇcisk`y, G´abor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised sequential autoencoders. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1078– 1087, Austin, Texas. Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1516–1526, Copenhagen, Denmark. Phong Le and Willem Zuidema. 2012. Learning compositional semantics for open domain semantic parsing. In Proceedings of the 24th International Conference on Computational Linguistics (COLING), pages 1535–1552, Mumbai, India. Percy Liang, Michael Jordan, and Dan Klein. 2011. 
Learning dependency-based compositional semantics. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 590–599, Portland, Oregon. Ella Rabinovich, Noam Ordan, and Shuly Wintner. 2017. Found in translation: Reconstructing phylogenetic language trees from translations. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 530–540, Vancouver, Canada. Mark Steedman. 2001. The Syntactic Process. The MIT Press. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Noortje J. Venhuizen, Johan Bos, and Harm Brouwer. 2013. Parsimonious semantic representations with projection pointers. In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) – Long Papers, pages 252–263, Potsdam, Germany. Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1332–1342, Beijing, China. Yuk Wah Wong and Raymond J. Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 439–446, New York City, USA. Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1341– 1350, Berlin, Germany. Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 440–450, Vancouver, Canada. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the 13th National Conference on Artificial Intelligence, pages 1050– 1055, Portland, Oregon. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In PProceedings of the 21st Conference in Uncertainty in Artificial Intelligence, pages 658– 666, Edinburgh, Scotland.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 440–450 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 440 Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms Dinghan Shen1, Guoyin Wang1, Wenlin Wang1, Martin Renqiang Min2 Qinliang Su3, Yizhe Zhang4, Chunyuan Li1, Ricardo Henao1, Lawrence Carin1 1 Duke University 2 NEC Laboratories America 3 Sun Yat-sen University 4 Microsoft Research [email protected] Abstract Many deep learning architectures have been proposed to model the compositionality in text sequences, requiring a substantial number of parameters and expensive computations. However, there has not been a rigorous evaluation regarding the added value of sophisticated compositional functions. In this paper, we conduct a point-by-point comparative study between Simple Word-Embeddingbased Models (SWEMs), consisting of parameter-free pooling operations, relative to word-embedding-based RNN/CNN models. Surprisingly, SWEMs exhibit comparable or even superior performance in the majority of cases considered. Based upon this understanding, we propose two additional pooling strategies over learned word embeddings: (i) a max-pooling operation for improved interpretability; and (ii) a hierarchical pooling operation, which preserves spatial (n-gram) information within text sequences. We present experiments on 17 datasets encompassing three tasks: (i) (long) document classification; (ii) text sequence matching; and (iii) short text tasks, including classification and tagging. 1 Introduction Word embeddings, learned from massive unstructured text data, are widely-adopted building blocks for Natural Language Processing (NLP). By representing each word as a fixed-length vector, these embeddings can group semantically similar words, while implicitly encoding rich linguistic regularities and patterns (Bengio et al., 2003; Mikolov et al., 2013; Pennington et al., 2014). Leveraging the word-embedding construct, many deep architectures have been proposed to model the compositionality in variable-length text sequences. These methods range from simple operations like addition (Mitchell and Lapata, 2010; Iyyer et al., 2015), to more sophisticated compositional functions such as Recurrent Neural Networks (RNNs) (Tai et al., 2015; Sutskever et al., 2014), Convolutional Neural Networks (CNNs) (Kalchbrenner et al., 2014; Kim, 2014; Zhang et al., 2017a) and Recursive Neural Networks (Socher et al., 2011a). Models with more expressive compositional functions, e.g., RNNs or CNNs, have demonstrated impressive results; however, they are typically computationally expensive, due to the need to estimate hundreds of thousands, if not millions, of parameters (Parikh et al., 2016). In contrast, models with simple compositional functions often compute a sentence or document embedding by simply adding, or averaging, over the word embedding of each sequence element obtained via, e.g., word2vec (Mikolov et al., 2013), or GloVe (Pennington et al., 2014). Generally, such a Simple Word-Embedding-based Model (SWEM) does not explicitly account for spatial, word-order information within a text sequence. However, they possess the desirable property of having significantly fewer parameters, enjoying much faster training, relative to RNN- or CNN-based models. Hence, there is a computation-vs.-expressiveness tradeoff regarding how to model the compositionality of a text sequence. 
In this paper, we conduct an extensive experimental investigation to understand when, and why, simple pooling strategies, operated over word embeddings alone, already carry sufficient information for natural language understanding. To account for the distinct nature of various NLP tasks that may require different semantic features, we 441 compare SWEM-based models with existing recurrent and convolutional networks in a pointby-point manner. Specifically, we consider 17 datasets, including three distinct NLP tasks: document classification (Yahoo news, Yelp reviews, etc.), natural language sequence matching (SNLI, WikiQA, etc.) and (short) sentence classification/tagging (Stanford sentiment treebank, TREC, etc.). Surprisingly, SWEMs exhibit comparable or even superior performance in the majority of cases considered. In order to validate our experimental findings, we conduct additional investigations to understand to what extent the word-order information is utilized/required to make predictions on different tasks. We observe that in text representation tasks, many words (e.g., stop words, or words that are not related to sentiment or topic) do not meaningfully contribute to the final predictions (e.g., sentiment label). Based upon this understanding, we propose to leverage a max-pooling operation directly over the word embedding matrix of a given sequence, to select its most salient features. This strategy is demonstrated to extract complementary features relative to the standard averaging operation, while resulting in a more interpretable model. Inspired by a case study on sentiment analysis tasks, we further propose a hierarchical pooling strategy to abstract and preserve the spatial information in the final representations. This strategy is demonstrated to exhibit comparable empirical results to LSTM and CNN on tasks that are sensitive to word-order features, while maintaining the favorable properties of not having compositional parameters, thus fast training. Our work presents a simple yet strong baseline for text representation learning that is widely ignored in benchmarks, and highlights the general computation-vs.-expressiveness tradeoff associated with appropriately selecting compositional functions for distinct NLP problems. Furthermore, we quantitatively show that the word-embeddingbased text classification tasks can have the similar level of difficulty regardless of the employed models, using the subspace training (Li et al., 2018) to constrain the trainable parameters. Thus, according to Occam’s razor, simple models are preferred. 2 Related Work A fundamental goal in NLP is to develop expressive, yet computationally efficient compositional functions that can capture the linguistic structure of natural language sequences. Recently, several studies have suggested that on certain NLP applications, much simpler word-embedding-based architectures exhibit comparable or even superior performance, compared with more-sophisticated models using recurrence or convolutions (Parikh et al., 2016; Vaswani et al., 2017). Although complex compositional functions are avoided in these models, additional modules, such as attention layers, are employed on top of the word embedding layer. As a result, the specific role that the word embedding plays in these models is not emphasized (or explicit), which distracts from understanding how important the word embeddings alone are to the observed superior performance. 
Moreover, several recent studies have shown empirically that the advantages of distinct compositional functions are highly dependent on the specific task (Mitchell and Lapata, 2010; Iyyer et al., 2015; Zhang et al., 2015a; Wieting et al., 2015; Arora et al., 2016). Therefore, it is of interest to study the practical value of the additional expressiveness, on a wide variety of NLP problems. SWEMs bear close resemblance to Deep Averaging Network (DAN) (Iyyer et al., 2015) or fastText (Joulin et al., 2016), where they show that average pooling achieves promising results on certain NLP tasks. However, there exist several key differences that make our work unique. First, we explore a series of pooling operations, rather than only average-pooling. Specifically, a hierarchical pooling operation is introduced to incorporate spatial information, which demonstrates superior results on sentiment analysis, relative to average pooling. Second, our work not only explores when simple pooling operations are enough, but also investigates the underlying reasons, i.e., what semantic features are required for distinct NLP problems. Third, DAN and fastText only focused on one or two problems at a time, thus a comprehensive study regarding the effectiveness of various compositional functions on distinct NLP tasks, e.g., categorizing short sentence/long documents, matching natural language sentences, has heretofore been absent. In response, our work seeks to perform a comprehensive comparison with respect to simple-vs.-complex compositional functions, across a wide range of NLP problems, and reveals some general rules for rationally selecting models to tackle different tasks. 442 3 Models & training Consider a text sequence represented as X (either a sentence or a document), composed of a sequence of words: {w1, w2, ...., wL}, where L is the number of tokens, i.e., the sentence/document length. Let {v1, v2, ...., vL} denote the respective word embeddings for each token, where vl 2 RK. The compositional function, X ! z, aims to combine word embeddings into a fixed-length sentence/document representation z. These representations are then used to make predictions about sequence X. Below, we describe different types of functions considered in this work. 3.1 Recurrent Sequence Encoder A widely adopted compositional function is defined in a recurrent manner: the model successively takes word vector vt at position t, along with the hidden unit ht−1 from the last position t −1, to update the current hidden unit via ht = f(vt, ht−1), where f(·) is the transition function. To address the issue of learning long-term dependencies, f(·) is often defined as Long ShortTerm Memory (LSTM) (Hochreiter and Schmidhuber, 1997), which employs gates to control the flow of information abstracted from a sequence. We omit the details of the LSTM and refer the interested readers to the work by Graves et al. (2013) for further explanation. Intuitively, the LSTM encodes a text sequence considering its word-order information, but yields additional compositional parameters that must be learned. 3.2 Convolutional Sequence Encoder The Convolutional Neural Network (CNN) architecture (Kim, 2014; Collobert et al., 2011; Gan et al., 2017; Zhang et al., 2017b; Shen et al., 2018) is another strategy extensively employed as the compositional function to encode text sequences. 
The convolution operation considers windows of n consecutive words within the sequence, where a set of filters (to be learned) are applied to these word windows to generate corresponding feature maps. Subsequently, an aggregation operation (such as max-pooling) is used on top of the feature maps to abstract the most salient semantic features, resulting in the final representation. For most experiments, we consider a singlelayer CNN text model. However, Deep CNN text models have also been developed (Conneau et al., 2016), and are considered in a few of our experiments. 3.3 Simple Word-Embedding Model (SWEM) To investigate the raw modeling capacity of word embeddings, we consider a class of models with no additional compositional parameters to encode natural language sequences, termed SWEMs. Among them, the simplest strategy is to compute the element-wise average over word vectors for a given sequence (Wieting et al., 2015; Adi et al., 2016): z = 1 L L X i=1 vi . (1) The model in (1) can be seen as an average pooling operation, which takes the mean over each of the K dimensions for all word embeddings, resulting in a representation z with the same dimension as the embedding itself, termed here SWEM-aver. Intuitively, z takes the information of every sequence element into account via the addition operation. Max Pooling Motivated by the observation that, in general, only a small number of key words contribute to final predictions, we propose another SWEM variant, that extracts the most salient features from every word-embedding dimension, by taking the maximum value along each dimension of the word vectors. This strategy is similar to the max-over-time pooling operation in convolutional neural networks (Collobert et al., 2011): z = Max-pooling(v1, v2, ..., vL) . (2) We denote this model variant as SWEM-max. Here the j-th component of z is the maximum element in the set {v1j, . . . , vLj}, where v1j is, for example, the j-th component of v1. With this pooling operation, those words that are unimportant or unrelated to the corresponding tasks will be ignored in the encoding process (as the components of the embedding vectors will have small amplitude), unlike SWEM-aver where every word contributes equally to the representation. Considering that SWEM-aver and SWEM-max are complementary, in the sense of accounting for different types of information from text sequences, we also propose a third SWEM variant, where the two abstracted features are concatenated together to form the sentence embeddings, denoted here as SWEM-concat. For all SWEM variants, there are no additional compositional parameters to be 443 Model Parameters Complexity Sequential Ops CNN n · K · d O(n · L · K · d) O(1) LSTM 4 · d · (K + d) O(L · d2 + L · K · d) O(L) SWEM 0 O(L · K) O(1) Table 1: Comparisons of CNN, LSTM and SWEM architectures. Columns correspond to the number of compositional parameters, computational complexity and sequential operations, respectively. learned. As a result, the models only exploit intrinsic word embedding information for predictions. Hierarchical Pooling Both SWEM-aver and SWEM-max do not take word-order or spatial information into consideration, which could be useful for certain NLP applications. So motivated, we further propose a hierarchical pooling layer. Let vi:i+n−1 refer to the local window consisting of n consecutive words words, vi, vi+1, ..., vi+n−1. First, an average-pooling is performed on each local window, vi:i+n−1. 
The extracted features from all windows are further down-sampled with a global max-pooling operation on top of the representations for every window. We call this approach SWEM-hier due to its layered pooling. This strategy preserves the local spatial information of a text sequence in the sense that it keeps track of how the sentence/document is constructed from individual word windows, i.e., n-grams. This formulation is related to bag-of-n-grams method (Zhang et al., 2015b). However, SWEM-hier learns fixed-length representations for the n-grams that appear in the corpus, rather than just capturing their occurrences via count features, which may potentially advantageous for prediction purposes. 3.4 Parameters & Computation Comparison We compare CNN, LSTM and SWEM wrt their parameters and computational speed. K denotes the dimension of word embeddings, as above. For the CNN, we use n to denote the filter width (assumed constant for all filters, for simplicity of analysis, but in practice variable n is commonly used). We define d as the dimension of the final sequence representation. Specifically, d represents the dimension of hidden units or the number of filters in LSTM or CNN, respectively. We first examine the number of compositional parameters for each model. As shown in Table 1, both the CNN and LSTM have a large number of parameters, to model the semantic compositionality of text sequences, whereas SWEM has no such parameters. Similar to Vaswani et al. (2017), we then consider the computational complexity and the minimum number of sequential operations required for each model. SWEM tends to be more efficient than CNN and LSTM in terms of computation complexity. For example, considering the case where K = d, SWEM is faster than CNN or LSTM by a factor of nd or d, respectively. Further, the computations in SWEM are highly parallelizable, unlike LSTM that requires O(L) sequential steps. 4 Experiments We evaluate different compositional functions on a wide variety of supervised tasks, including document categorization, text sequence matching (given a sentence pair, X1, X2, predict their relationship, y) as well as (short) sentence classification. We experiment on 17 datasets concerning natural language understanding, with corresponding data statistics summarized in the Supplementary Material. Our code will be released to encourage future research. We use GloVe word embeddings with K = 300 (Pennington et al., 2014) as initialization for all our models. Out-Of-Vocabulary (OOV) words are initialized from a uniform distribution with range [−0.01, 0.01]. The GloVe embeddings are employed in two ways to learn refined word embeddings: (i) directly updating each word embedding during training; and (ii) training a 300dimensional Multilayer Perceptron (MLP) layer with ReLU activation, with GloVe embeddings as input to the MLP and with output defining the refined word embeddings. The latter approach corresponds to learning an MLP model that adapts GloVe embeddings to the dataset and task of interest. The advantages of these two methods differ from dataset to dataset. We choose the better strategy based on their corresponding performances on the validation set. The final classifier is implemented as an MLP layer with dimension selected from the set [100, 300, 500, 1000], followed by a sigmoid or softmax function, depending on the specific task. 
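The three parameter-free pooling operations of Section 3.3 (average, max, and hierarchical pooling) can be stated in a few lines of NumPy; the window size and function names below are illustrative, not taken from the released code.

```python
import numpy as np

def swem_aver(v):                      # v: (L, K) word-embedding matrix
    return v.mean(axis=0)              # Eq. (1)

def swem_max(v):
    return v.max(axis=0)               # Eq. (2)

def swem_concat(v):
    return np.concatenate([swem_aver(v), swem_max(v)])

def swem_hier(v, n=5):
    """Average-pool every window of n consecutive words, then max-pool over
    the window representations (SWEM-hier); n is an illustrative choice."""
    L, K = v.shape
    if L <= n:                         # short sentence: a single window
        return v.mean(axis=0)
    windows = np.stack([v[i:i + n].mean(axis=0) for i in range(L - n + 1)])
    return windows.max(axis=0)

v = np.random.randn(12, 300)           # a 12-word sentence with K = 300
print(swem_aver(v).shape, swem_max(v).shape,
      swem_concat(v).shape, swem_hier(v).shape)
# (300,) (300,) (600,) (300,)
```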
Adam (Kingma and Ba, 2014) is used to optimize all models, with learning rate selected from the set [1 ⇥10−3, 3 ⇥10−4, 2 ⇥10−4, 1 ⇥10−5] (with cross-validation used to select the appropriate parameter for a given dataset and task). Dropout regularization (Srivastava et al., 2014) is 444 Model Yahoo! Ans. AG News Yelp P. Yelp F. DBpedia Bag-of-means⇤ 60.55 83.09 87.33 53.54 90.45 Small word CNN⇤ 69.98 89.13 94.46 58.59 98.15 Large word CNN⇤ 70.94 91.45 95.11 59.48 98.28 LSTM⇤ 70.84 86.06 94.74 58.17 98.55 Deep CNN (29 layer)† 73.43 91.27 95.72 64.26 98.71 fastText ‡ 72.0 91.5 93.8 60.4 98.1 fastText (bigram)‡ 72.3 92.5 95.7 63.9 98.6 SWEM-aver 73.14 91.71 93.59 60.66 98.42 SWEM-max 72.66 91.79 93.25 59.63 98.24 SWEM-concat 73.53 92.66 93.76 61.11 98.57 SWEM-hier 73.48 92.48 95.81 63.79 98.54 Table 2: Test accuracy on (long) document classification tasks, in percentage. Results marked with ⇤are reported in Zhang et al. (2015b), with † are reported in Conneau et al. (2016), and with ‡ are reported in Joulin et al. (2016). Politics Science Computer Sports Chemistry Finance Geoscience philipdru coulomb system32 billups sio2 (SiO2) proprietorship fossil justices differentiable cobol midfield nonmetal ameritrade zoos impeached paranormal agp sportblogs pka retailing farming impeachment converge dhcp mickelson chemistry mlm volcanic neocons antimatter win98 juventus quarks budgeting ecosystem Table 3: Top five words with the largest values in a given word-embedding dimension (each column corresponds to a dimension). The first row shows the (manually assigned) topic for words in each column. employed on the word embedding layer and final MLP layer, with dropout rate selected from the set [0.2, 0.5, 0.7]. The batch size is selected from [2, 8, 32, 128, 512]. 4.1 Document Categorization We begin with the task of categorizing documents (with approximately 100 words in average per document). We follow the data split in Zhang et al. (2015b) for comparability. These datasets can be generally categorized into three types: topic categorization (represented by Yahoo! Answer and AG news), sentiment analysis (represented by Yelp Polarity and Yelp Full) and ontology classification (represented by DBpedia). Results are shown in Table 2. Surprisingly, on topic prediction tasks, our SWEM model exhibits stronger performances, relative to both LSTM and CNN compositional architectures, this by leveraging both the average and max-pooling features from word embeddings. Specifically, our SWEM-concat model even outperforms a 29-layer deep CNN model (Conneau et al., 2016), when predicting topics. On the ontology classification problem (DBpedia dataset), we observe the same trend, that SWEM exhibits comparable or even superior results, relative to CNN or LSTM models. Since there are no compositional parameters in SWEM, our models have an order of magnitude fewer parameters (excluding embeddings) than LSTM or CNN, and are considerably more computationally efficient. As illustrated in Table 4, SWEM-concat achieves better results on Yahoo! Answer than CNN/LSTM, with only 61K parameters (one-tenth the number of LSTM parameters, or one-third the number of CNN parameters), while taking a fraction of the training time relative to the CNN or LSTM. Model Parameters Speed CNN 541K 171s LSTM 1.8M 598s SWEM 61K 63s Table 4: Speed & Parameters on Yahoo! Answer dataset. 
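The hyper-parameter search described in the setup above amounts to a small grid selected on the validation set. The sketch below only shows the shape of that loop; `train_and_eval` is a placeholder for fitting a model under one configuration and returning validation accuracy, not a function from the paper's code.

```python
from itertools import product

# Grids quoted in the text (learning rate, dropout rate, batch size, MLP dim).
LEARNING_RATES = [1e-3, 3e-4, 2e-4, 1e-5]
DROPOUT_RATES = [0.2, 0.5, 0.7]
BATCH_SIZES = [2, 8, 32, 128, 512]
MLP_DIMS = [100, 300, 500, 1000]

def select_config(train_and_eval):
    """train_and_eval(lr, dropout, batch, mlp_dim) -> validation accuracy."""
    best, best_cfg = -1.0, None
    for lr, dr, bs, dim in product(LEARNING_RATES, DROPOUT_RATES,
                                   BATCH_SIZES, MLP_DIMS):
        acc = train_and_eval(lr, dr, bs, dim)
        if acc > best:
            best, best_cfg = acc, (lr, dr, bs, dim)
    return best_cfg, best

# toy stand-in for demonstration only
cfg, acc = select_config(lambda lr, dr, bs, dim: 0.7 + 0.01 * (dr == 0.5))
print(cfg, acc)
```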
Interestingly, for the sentiment analysis tasks, both CNN and LSTM compositional functions perform better than SWEM, suggesting that wordorder information may be required for analyzing sentiment orientations. This finding is consistent with Pang et al. (2002), where they hypothesize that the positional information of a word in text sequences may be beneficial to predict sentiment. This is intuitively reasonable since, for instance, the phrase “not really good” and “really not good” convey different levels of negative sentiment, while being different only by their word orderings. Contrary to SWEM, CNN and 445 LSTM models can both capture this type of information via convolutional filters or recurrent transition functions. However, as suggested above, such word-order patterns may be much less useful for predicting the topic of a document. This may be attributed to the fact that word embeddings alone already provide sufficient topic information of a document, at least when the text sequences considered are relatively long. 4.1.1 Interpreting model predictions Although the proposed SWEM-max variant generally performs a slightly worse than SWEM-aver, it extracts complementary features from SWEMaver, and hence in most cases SWEM-concat exhibits the best performance among all SWEM variants. More importantly, we found that the word embeddings learned from SWEM-max tend to be sparse. We trained our SWEM-max model on the Yahoo datasets (randomly initialized). With the learned embeddings, we plot the values for each of the word embedding dimensions, for the entire vocabulary. As shown in Figure 1, most of the values are highly concentrated around zero, indicating that the word embeddings learned are very sparse. On the contrary, the GloVe word embeddings, for the same vocabulary, are considerably denser than the embeddings learned from SWEM-max. This suggests that the model may only depend on a few key words, among the entire vocabulary, for predictions (since most words do not contribute to the max-pooling operation in SWEM-max). Through the embedding, the model learns the important words for a given task (those words with non-zero embedding components). Figure 1: Histograms for learned word embeddings (randomly initialized) of SWEM-max and GloVe embeddings for the same vocabulary, trained on the Yahoo! Answer dataset. In this regard, the nature of max-pooling process gives rise to a more interpretable model. For a document, only the word with largest value in each embedding dimension is employed for the final representation. Thus, we suspect that semantically similar words may have large values in some shared dimensions. So motivated, after training the SWEM-max model on the Yahoo dataset, we selected five words with the largest values, among the entire vocabulary, for each word embedding dimension (these words are selected preferentially in the corresponding dimension, by the max operation). As shown in Table 3, the words chosen wrt each embedding dimension are indeed highly relevant and correspond to a common topic (the topics are inferred from words). For example, the words in the first column of Table 3 are all political terms, which could be assigned to the Politics & Government topic. Note that our model can even learn locally interpretable structure that is not explicitly indicated by the label information. For instance, all words in the fifth column are Chemistry-related. However, we do not have a chemistry label in the dataset, and regardless they should belong to the Science topic. 
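The inspection behind Table 3 is simple to reproduce once a SWEM-max model has been trained: for every embedding dimension, list the words whose learned vectors have the largest value in that dimension. The snippet below is a sketch with a toy embedding matrix; `vocab` and `emb` stand in for the trained vocabulary and embedding table.

```python
import numpy as np

def top_words_per_dimension(emb, vocab, k=5):
    """emb: (V, K) learned word-embedding matrix; vocab: list of V words.
    Returns, for each embedding dimension, the k words with the largest
    value in that dimension (the words preferred by max-pooling)."""
    top = np.argsort(-emb, axis=0)[:k]          # (k, K) word indices
    return [[vocab[i] for i in top[:, d]] for d in range(emb.shape[1])]

vocab = ["justices", "impeached", "coulomb", "midfield", "sio2", "the"]
emb = np.random.randn(len(vocab), 4)            # toy 4-dimensional embeddings
for d, words in enumerate(top_words_per_dimension(emb, vocab, k=3)):
    print(f"dim {d}: {words}")
```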
4.2 Text Sequence Matching To gain a deeper understanding regarding the modeling capacity of word embeddings, we further investigate the problem of sentence matching, including natural language inference, answer sentence selection and paraphrase identification. The corresponding performance metrics are shown in Table 5. Surprisingly, on most of the datasets considered (except WikiQA), SWEM demonstrates the best results compared with those with CNN or the LSTM encoder. Notably, on SNLI dataset, we observe that SWEM-max performs the best among all SWEM variants, consistent with the findings in Nie and Bansal (2017); Conneau et al. (2017), that max-pooling over BiLSTM hidden units outperforms average pooling operation on SNLI dataset. As a result, with only 120K parameters, our SWEM-max achieves a test accuracy of 83.8%, which is very competitive among state-ofthe-art sentence encoding-based models (in terms of both performance and number of parameters)1. The strong results of the SWEM approach on these tasks may stem from the fact that when matching natural language sentences, it is sufficient in most cases to simply model the word-level 1See leaderboard at https://nlp.stanford.edu/ projects/snli/ for details. 446 MultiNLI Model SNLI Matched Mismatched WikiQA Quora MSRP Acc. Acc. Acc. MAP MRR Acc. Acc. F1 CNN 82.1 65.0 65.3 0.6752 0.6890 79.60 69.9 80.9 LSTM 80.6 66.9⇤ 66.9⇤ 0.6820 0.6988 82.58 70.6 80.5 SWEM-aver 82.3 66.5 66.2 0.6808 0.6922 82.68 71.0 81.1 SWEM-max 83.8 68.2 67.7 0.6613 0.6717 82.20 70.6 80.8 SWEM-concat 83.3 67.9 67.6 0.6788 0.6908 83.03 71.5 81.3 Table 5: Performance of different models on matching natural language sentences. Results with ⇤are for Bidirectional LSTM, reported in Williams et al. (2017). Our reported results on MultiNLI are only trained MultiNLI training set (without training data from SNLI). For MSRP dataset, we follow the setup in Hu et al. (2014) and do not use any additional features. alignments between two sequences (Parikh et al., 2016). From this perspective, word-order information becomes much less useful for predicting relationship between sentences. Moreover, considering the simpler model architecture of SWEM, they could be much easier to be optimized than LSTM or CNN-based models, and thus give rise to better empirical results. 4.2.1 Importance of word-order information One possible disadvantage of SWEM is that it ignores the word-order information within a text sequence, which could be potentially captured by CNN- or LSTM-based models. However, we empirically found that except for sentiment analysis, SWEM exhibits similar or even superior performance as the CNN or LSTM on a variety of tasks. In this regard, one natural question would be: how important are word-order features for these tasks? To this end, we randomly shuffle the words for every sentence in the training set, while keeping the original word order for samples in the test set. The motivation here is to remove the word-order features from the training set and examine how sensitive the performance on different tasks are to word-order information. We use LSTM as the model for this purpose since it can captures wordorder information from the original training set. Datasets Yahoo Yelp P. SNLI Original 72.78 95.11 78.02 Shuffled 72.89 93.49 77.68 Table 6: Test accuracy for LSTM model trained on original/shuffled training set. The results on three distinct tasks are shown in Table 6. 
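The shuffling ablation referenced in Table 6 is straightforward to set up: word order is destroyed in the training sentences only, while test sentences are left intact. A minimal sketch follows (the fixed seed is our addition for reproducibility; the paper does not specify one):

```python
import random

def shuffle_training_set(train_sentences, seed=0):
    """Return a copy of the training data in which the tokens of every
    sentence are randomly permuted; labels and the test set are untouched."""
    rng = random.Random(seed)
    shuffled = []
    for tokens, label in train_sentences:
        tokens = list(tokens)
        rng.shuffle(tokens)
        shuffled.append((tokens, label))
    return shuffled

train = [(["really", "not", "good"], "neg"), (["friendly", "staff"], "pos")]
print(shuffle_training_set(train))
```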
Somewhat surprisingly, for the Yahoo and SNLI datasets, the LSTM model trained on the shuffled training set shows accuracies comparable to those trained on the original dataset, indicating that word-order information does not contribute significantly to these two problems, i.e., topic categorization and textual entailment. However, on the Yelp polarity dataset, the results drop noticeably, further suggesting that word order does matter for sentiment analysis (as indicated above from a different perspective). Notably, the performance of the LSTM on the Yelp dataset with a shuffled training set is very close to our results with SWEM, indicating that the main difference between the LSTM and SWEM may be due to the ability of the former to capture word-order features. Both observations are consistent with our experimental results in the previous section.
Table 7: Test samples from the Yelp Polarity dataset for which the LSTM gives wrong predictions with shuffled training data, but predicts correctly with the original training set.
Negative: Friendly staff and nice selection of vegetarian options. Food is just okay, not great. Makes me wonder why everyone likes food fight so much.
Positive: The store is small, but it carries specialties that are difficult to find in Pittsburgh. I was particularly excited to find middle eastern chili sauce and chocolate covered turkish delights.
Case Study To understand what type of sentences are sensitive to word-order information, we further show, in Table 7, samples that are wrongly predicted because of the shuffling of the training data. Taking the first sentence as an example, several words in the review are generally positive, i.e., friendly, nice, okay, great and likes. However, the most vital features for predicting the sentiment of this sentence could be the phrases ‘is just okay’, ‘not great’ or ‘makes me wonder why everyone likes’, which cannot be captured without considering word-order features. It is worth noting that the hints for prediction in this case are actually n-gram phrases from the input document.
Table 8: Test accuracies with different compositional functions on (short) sentence classifications.
Model                                     | MR   | SST-1 | SST-2 | Subj | TREC
RAE (Socher et al., 2011b)                | 77.7 | 43.2  | 82.4  | –    | –
MV-RNN (Socher et al., 2012)              | 79.0 | 44.4  | 82.9  | –    | –
LSTM (Tai et al., 2015)                   | –    | 46.4  | 84.9  | –    | –
RNN (Zhao et al., 2015)                   | 77.2 | –     | –     | 93.7 | 90.2
Constituency Tree-LSTM (Tai et al., 2015) | –    | 51.0  | 88.0  | –    | –
Dynamic CNN (Kalchbrenner et al., 2014)   | –    | 48.5  | 86.8  | –    | 93.0
CNN (Kim, 2014)                           | 81.5 | 48.0  | 88.1  | 93.4 | 93.6
DAN-ROOT (Iyyer et al., 2015)             | –    | 46.9  | 85.7  | –    | –
SWEM-aver                                 | 77.6 | 45.2  | 83.9  | 92.5 | 92.2
SWEM-max                                  | 76.9 | 44.1  | 83.6  | 91.2 | 89.0
SWEM-concat                               | 78.2 | 46.1  | 84.3  | 93.0 | 91.8
4.3 SWEM-hier for sentiment analysis
As demonstrated in Section 4.2.1, word-order information plays a vital role in sentiment analysis tasks. However, according to the case study above, the most important features for sentiment prediction may be certain key n-gram phrases or words from the input document. We hypothesize that incorporating information about the local word order, i.e., n-gram features, is likely to largely mitigate this limitation of the above three SWEM variants. Inspired by this observation, we propose another simple pooling operation, termed hierarchical pooling (SWEM-hier), as detailed in Section 3.3. We evaluate this method on the two document-level sentiment analysis tasks and the results are shown in the last row of Table 2.
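For concreteness, the following is a minimal sketch of hierarchical pooling as we describe it above: average pooling within each local window of consecutive word embeddings, followed by max pooling across windows. It is an illustrative PyTorch re-implementation, not the exact model code; the default window size of 5 mirrors the setting used later for the Sogou experiments.

```python
import torch

def swem_hier(word_embeddings, window=5):
    """Hierarchical pooling sketch: mean-pool within each local window of consecutive
    words, then max-pool across windows. Assumes seq_len >= window.

    word_embeddings: (seq_len, dim) tensor for one text; returns a (dim,) vector.
    """
    # unfold along the word axis: shape (num_windows, dim, window)
    windows = word_embeddings.unfold(0, window, 1)
    window_means = windows.mean(dim=-1)        # (num_windows, dim): average each n-gram's embeddings
    return window_means.max(dim=0).values      # (dim,): max over window positions

x = torch.randn(30, 300)     # e.g., a 30-word document with 300-d word embeddings
print(swem_hier(x).shape)    # torch.Size([300])
```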
SWEM-hier greatly outperforms the other three SWEM variants, and the corresponding accuracies are comparable to the results of the CNN or LSTM (Table 2). This indicates that the proposed hierarchical pooling operation manages to abstract spatial (word-order) information from the input sequence, which is beneficial for performance on sentiment analysis tasks.
4.4 Short Sentence Processing
We now consider sentence-classification tasks (with approximately 20 words on average). We experiment on three sentiment classification datasets, i.e., MR, SST-1 and SST-2, as well as subjectivity classification (Subj) and question classification (TREC). The corresponding results are shown in Table 8. Compared with CNN/LSTM compositional functions, SWEM yields inferior accuracies on the sentiment analysis datasets, consistent with our observation in the case of document categorization. However, SWEM exhibits comparable performance on the other two tasks, again with many fewer parameters and faster training. Further, we investigate two sequence tagging tasks: the standard CoNLL-2000 chunking and CoNLL-2003 NER datasets. Results are shown in the Supplementary Material, where the LSTM and CNN again perform better than SWEMs. Generally, SWEM is less effective at extracting representations from short sentences than from long documents. This may be due to the fact that, for a shorter text sequence, word-order features tend to be more important since the semantic information provided by word embeddings alone is relatively limited. Moreover, we note that the results on these relatively small datasets are highly sensitive to model regularization techniques due to overfitting issues. In this regard, one interesting future direction may be to develop specific regularization strategies for the SWEM framework, and thus make it work better on small sentence classification datasets.
5 Discussion
5.1 Comparison via subspace training
We use subspace training (Li et al., 2018) to measure model complexity in text classification problems. It constrains the optimization of the trainable parameters to a subspace of low dimension d; the intrinsic dimension d_int is defined as the minimum d that yields a good solution. Two models are studied: the SWEM-max variant, and a CNN model consisting of a convolutional layer followed by an FC layer. We consider two settings.
(1) The word embeddings are randomly initialized and optimized jointly with the model parameters. We show the performance of direct and subspace training on the AG News dataset in Figure 2 (a)(b).
Figure 2: Performance of subspace training. Word embeddings are optimized in (a)(b), and frozen in (c)(d). Panels: (a) training on AG News, (b) testing on AG News, (c) testing on AG News, (d) testing on Yelp P.; each panel plots accuracy against the subspace dimension d for SWEM and CNN, with direct training shown as a baseline.
The two models trained via the direct method share almost identical performance on training and testing. Subspace training yields accuracy similar to direct training for very small d, even when the model parameters are not trained at all (d = 0). This is because the word embeddings have the full degrees of freedom to adjust to achieve good solutions, regardless of the employed model.
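To make the subspace-training setup concrete, the sketch below shows the basic construction we rely on from Li et al. (2018): every model parameter is expressed as its frozen random initialization plus a fixed random projection of a low-dimensional vector z, and only z (of dimension d) is optimized. This is a simplified, hypothetical re-implementation (it assumes a recent PyTorch with torch.func), not the code behind Figure 2.

```python
import torch
import torch.nn as nn
from torch.func import functional_call

class SubspaceModel(nn.Module):
    """Train a model only inside a random d-dimensional subspace of its parameters.

    Each parameter is reconstructed as theta_0 + P z, where theta_0 is the frozen
    initialization, P is a fixed random projection, and the d-dimensional vector z
    is the only trainable quantity. Simplified sketch of the intrinsic-dimension idea.
    """
    def __init__(self, model, d):
        super().__init__()
        self.model = model
        self.z = nn.Parameter(torch.zeros(d))               # the only trainable parameter
        for p in model.parameters():                        # freeze the wrapped model
            p.requires_grad_(False)
        self.theta0 = {n: p.detach().clone() for n, p in model.named_parameters()}
        # one fixed random projection per parameter tensor (never trained)
        self.proj = {n: torch.randn(p.numel(), d) / d ** 0.5
                     for n, p in model.named_parameters()}

    def forward(self, *args, **kwargs):
        # Rebuild every parameter from the subspace coordinates, then run the model functionally.
        params = {n: (self.theta0[n].flatten() + self.proj[n] @ self.z).view_as(self.theta0[n])
                  for n in self.theta0}
        return functional_call(self.model, params, args, kwargs)

# Hypothetical usage with d = 10 on a small classifier head:
net = SubspaceModel(nn.Linear(300, 4), d=10)
loss = net(torch.randn(8, 300)).sum()
loss.backward()          # gradients reach only net.z (10 numbers)
print(net.z.grad.shape)  # torch.Size([10])
```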
SWEM seems to have an easier loss landscape than the CNN for the word embeddings to find good solutions in. According to Occam's razor, simple models are preferred if all else is equal.
(2) The pre-trained GloVe embeddings are frozen, and only the model parameters are optimized. The results on the test sets of AG News and Yelp P. are shown in Figure 2 (c)(d), respectively. SWEM shows significantly higher accuracy than the CNN over a large range of low subspace dimensions, indicating that SWEM is more parameter-efficient at reaching a decent solution. In Figure 2(c), if we set the performance threshold at 80% testing accuracy, SWEM exhibits a lower d_int than the CNN on the AG News dataset. However, in Figure 2(d), the CNN can leverage more trainable parameters to achieve higher accuracy when d is large.
5.2 Linear classifiers
To further investigate the quality of the representations learned by SWEMs, we employ a linear classifier on top of the representations for prediction, instead of the non-linear MLP layer used in the previous section. It turns out that utilizing a linear classifier leads to only a very small performance drop on both the Yahoo! Ans. (from 73.53% to 73.18%) and Yelp P. (from 93.76% to 93.66%) datasets. This observation highlights that SWEMs are able to extract robust and informative sentence representations despite their simplicity.
5.3 Extension to other languages
We have also tried our SWEM-concat and SWEM-hier models on the Sogou news corpus (with the same experimental setup as Zhang et al. (2015b)), a Chinese dataset represented by Pinyin (a phonetic romanization of Chinese). SWEM-concat yields an accuracy of 91.3%, while SWEM-hier (with a local window size of 5) obtains an accuracy of 96.2% on the test set. Notably, the performance of SWEM-hier is comparable to the best accuracies of the CNN (95.6%) and LSTM (95.2%) reported in Zhang et al. (2015b). This indicates that hierarchical pooling is more suitable than average/max pooling for Chinese text classification, by taking spatial information into account. It also implies that Chinese is more sensitive to local word-order features than English.
6 Conclusions
We have performed a comparative study between SWEM (with parameter-free pooling operations) and CNN- or LSTM-based models for representing text sequences on 17 NLP datasets. We further validated our experimental findings through additional exploration, and revealed some general rules for rationally selecting compositional functions for distinct problems. Our findings regarding when (and why) simple pooling operations are enough for text sequence representations are summarized as follows:
• Simple pooling operations are surprisingly effective at representing longer documents (with hundreds of words), while recurrent/convolutional compositional functions are most effective when constructing representations for short sentences.
• Sentiment analysis tasks are more sensitive to word-order features than topic categorization tasks. However, the simple hierarchical pooling layer proposed here achieves results comparable to LSTM/CNN on sentiment analysis tasks.
• To match natural language sentences, e.g., for textual entailment, answer sentence selection, etc., simple pooling operations already exhibit similar or even superior results, compared to CNN and LSTM.
• In SWEM with the max-pooling operation, each individual dimension of the word embeddings contains interpretable semantic patterns, and groups together words with a common theme or topic.
449 References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. ICLR. Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2016. A simple but tough-to-beat baseline for sentence embeddings. In ICLR. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. JMLR, 3(Feb):1137–1155. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR, 12(Aug):2493–2537. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. EMNLP. Alexis Conneau, Holger Schwenk, Lo¨ıc Barrault, and Yann Lecun. 2016. Very deep convolutional networks for natural language processing. arXiv preprint arXiv:1606.01781. Zhe Gan, Yunchen Pu, Ricardo Henao, Chunyuan Li, Xiaodong He, and Lawrence Carin. 2017. Learning generic sentence representations using convolutional neural networks. In EMNLP, pages 2380–2390. Alex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed. 2013. Hybrid speech recognition with deep bidirectional lstm. In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on, pages 273–278. IEEE. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In NIPS, pages 2042–2050. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In ACL, volume 1, pages 16 81–1691. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188. Yoon Kim. 2014. Convolutional neural networks for sentence classification. EMNLP. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Chunyuan Li, Heerad Farkhoor, Rosanne Liu, and Jason Yosinski. 2018. Measuring the intrinsic dimension of objective landscapes. In International Conference on Learning Representations. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive science, 34(8):1388–1429. Yixin Nie and Mohit Bansal. 2017. Shortcutstacked sentence encoders for multi-domain inference. arXiv preprint arXiv:1708.02312. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In EMNLP, pages 79– 86. ACL. Ankur P Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. EMNLP. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Dinghan Shen, Martin Renqiang Min, Yitong Li, and Lawrence Carin. 2017. 
Adaptive convolutional filter generation for natural language understanding. arXiv preprint arXiv:1709.08294. Dinghan Shen, Yizhe Zhang, Ricardo Henao, Qinliang Su, and Lawrence Carin. 2018. Deconvolutional latent-variable model for text sequence matching. AAAI. Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In EMNLP, pages 1201–1211. Association for Computational Linguistics. Richard Socher, Cliff C Lin, Chris Manning, and Andrew Y Ng. 2011a. Parsing natural scenes and natural language with recursive neural networks. In ICML, pages 129–136. Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011b. Semi-supervised recursive autoencoders for predicting sentiment distributions. In EMNLP, pages 151– 161. Association for Computational Linguistics. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958. 450 Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS, pages 3104–3112. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. NIPS. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Towards universal paraphrastic sentence embeddings. ICLR. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426. Shiliang Zhang, Hui Jiang, Mingbin Xu, Junfeng Hou, and Lirong Dai. 2015a. The fixed-size ordinallyforgetting encoding method for neural network language models. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 495–500. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015b. Character-level convolutional networks for text classification. In NIPS, pages 649–657. Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. 2017a. Adversarial feature matching for text generation. In ICML. Yizhe Zhang, Dinghan Shen, Guoyin Wang, Zhe Gan, Ricardo Henao, and Lawrence Carin. 2017b. Deconvolutional paragraph representation learning. NIPS. Han Zhao, Zhengdong Lu, and Pascal Poupart. 2015. Self-adaptive hierarchical sentence model. In IJCAI, pages 4069–4076.
2018
41
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 451–462 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 451 PARANMT-50M: Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations John Wieting1 Kevin Gimpel2 1Carnegie Mellon University, Pittsburgh, PA, 15213, USA 2Toyota Technological Institute at Chicago, Chicago, IL, 60637, USA [email protected], [email protected] Abstract We describe PARANMT-50M, a dataset of more than 50 million English-English sentential paraphrase pairs. We generated the pairs automatically by using neural machine translation to translate the nonEnglish side of a large parallel corpus, following Wieting et al. (2017). Our hope is that PARANMT-50M can be a valuable resource for paraphrase generation and can provide a rich source of semantic knowledge to improve downstream natural language understanding tasks. To show its utility, we use PARANMT-50M to train paraphrastic sentence embeddings that outperform all supervised systems on every SemEval semantic textual similarity competition, in addition to showing how it can be used for paraphrase generation.1 1 Introduction While many approaches have been developed for generating or finding paraphrases (Barzilay and McKeown, 2001; Lin and Pantel, 2001; Dolan et al., 2004), there do not exist any freelyavailable datasets with millions of sentential paraphrase pairs. The closest such resource is the Paraphrase Database (PPDB; Ganitkevitch et al., 2013), which was created automatically from bilingual text by pivoting over the non-English language (Bannard and Callison-Burch, 2005). PPDB has been used to improve word embeddings (Faruqui et al., 2015; Mrkˇsi´c et al., 2016). However, PPDB is less useful for learning sentence embeddings (Wieting and Gimpel, 2017). In this paper, we describe the creation of a dataset containing more than 50 million sentential 1 Dataset, code, and embeddings are available at https: //www.cs.cmu.edu/˜jwieting. paraphrase pairs. We create it automatically by scaling up the approach of Wieting et al. (2017). We use neural machine translation (NMT) to translate the Czech side of a large Czech-English parallel corpus. We pair the English translations with the English references to form paraphrase pairs. We call this dataset PARANMT-50M. It contains examples illustrating a broad range of paraphrase phenomena; we show examples in Section 3. PARANMT-50M has the potential to be useful for many tasks, from linguistically controlled paraphrase generation, style transfer, and sentence simplification to core NLP problems like machine translation. We show the utility of PARANMT-50M by using it to train paraphrastic sentence embeddings using the learning framework of Wieting et al. (2016b). We primarily evaluate our sentence embeddings on the SemEval semantic textual similarity (STS) competitions from 2012-2016. Since so many domains are covered in these datasets, they form a demanding evaluation for a general purpose sentence embedding model. Our sentence embeddings learned from PARANMT-50M outperform all systems in every STS competition from 2012 to 2016. These tasks have drawn substantial participation; in 2016, for example, the competition attracted 43 teams and had 119 submissions. Most STS systems use curated lexical resources, the provided supervised training data with manually-annotated similarities, and joint modeling of the sentence pair. 
We use none of these, simply encoding each sentence independently using our models and computing cosine similarity between their embeddings. We experiment with several compositional architectures and find them all to work well. We find benefit from making a simple change to learning (“mega-batching”) to better leverage the large training set, namely, increasing the search space 452 of negative examples. In the supplementary, we evaluate on general-purpose sentence embedding tasks used in past work (Kiros et al., 2015; Conneau et al., 2017), finding our embeddings to perform competitively. Finally, in Section 6, we briefly report results showing how PARANMT-50M can be used for paraphrase generation. A standard encoderdecoder model trained on PARANMT-50M can generate paraphrases that show effects of “canonicalizing” the input sentence. In other work, fully described by Iyyer et al. (2018), we used PARANMT-50M to generate paraphrases that have a specific syntactic structure (represented as the top two levels of a linearized parse tree). We release the PARANMT-50M dataset, our trained sentence embeddings, and our code. PARANMT-50M is the largest collection of sentential paraphrases released to date. We hope it can motivate new research directions and be used to create powerful NLP models, while adding a robustness to existing ones by incorporating paraphrase knowledge. Our paraphrastic sentence embeddings are state-of-the-art by a significant margin, and we hope they can be useful for many applications both as a sentence representation function and as a general similarity metric. 2 Related Work We discuss work in automatically building paraphrase corpora, learning general-purpose sentence embeddings, and using parallel text for learning embeddings and similarity functions. Paraphrase discovery and generation. Many methods have been developed for generating or finding paraphrases, including using multiple translations of the same source material (Barzilay and McKeown, 2001), using distributional similarity to find similar dependency paths (Lin and Pantel, 2001), using comparable articles from multiple news sources (Dolan et al., 2004; Dolan and Brockett, 2005; Quirk et al., 2004), aligning sentences between standard and Simple English Wikipedia (Coster and Kauchak, 2011), crowdsourcing (Xu et al., 2014, 2015; Jiang et al., 2017), using diverse MT systems to translate a single source sentence (Suzuki et al., 2017), and using tweets with matching URLs (Lan et al., 2017). The most relevant prior work uses bilingual corpora. Bannard and Callison-Burch (2005) used methods from statistical machine translation to find lexical and phrasal paraphrases in parallel text. Ganitkevitch et al. (2013) scaled up these techniques to produce the Paraphrase Database (PPDB). Our goals are similar to those of PPDB, which has likewise been generated for many languages (Ganitkevitch and Callison-Burch, 2014) since it only needs parallel text. In particular, we follow the approach of Wieting et al. (2017), who used NMT to translate the non-English side of parallel text to get English-English paraphrase pairs. We scale up the method to a larger dataset, produce state-of-the-art paraphrastic sentence embeddings, and release all of our resources. Sentence embeddings. Our learning and evaluation setting is the same as that of our recent work that seeks to learn paraphrastic sentence embeddings that can be used for downstream tasks (Wieting et al., 2016b,a; Wieting and Gimpel, 2017; Wieting et al., 2017). 
We trained models on noisy paraphrase pairs and evaluated them primarily on semantic textual similarity (STS) tasks. Prior work in learning general sentence embeddings has used autoencoders (Socher et al., 2011; Hill et al., 2016), encoder-decoder architectures (Kiros et al., 2015; Gan et al., 2017), and other sources of supervision and learning frameworks (Le and Mikolov, 2014; Pham et al., 2015; Arora et al., 2017; Pagliardini et al., 2017; Conneau et al., 2017). Parallel text for learning embeddings. Prior work has shown that parallel text, and resources built from parallel text like NMT systems and PPDB, can be used for learning embeddings for words and sentences. Several have used PPDB as a knowledge resource for training or improving embeddings (Faruqui et al., 2015; Wieting et al., 2015; Mrkˇsi´c et al., 2016). NMT architectures and training settings have been used to obtain better embeddings for words (Hill et al., 2014a,b) and words-in-context (McCann et al., 2017). Hill et al. (2016) evaluated the encoders of Englishto-X NMT systems as sentence representations. Mallinson et al. (2017) adapted trained NMT models to produce sentence similarity scores in semantic evaluations. 3 The PARANMT-50M Dataset To create our dataset, we used back-translation of bitext (Wieting et al., 2017). We used a CzechEnglish NMT system to translate Czech sentences 453 Dataset Avg. Length Avg. IDF Avg. Para. Score Vocab. Entropy Parse Entropy Total Size Common Crawl 24.0±34.7 7.7±1.1 0.83±0.16 7.2 3.5 0.16M CzEng 1.6 13.3±19.3 7.4±1.2 0.84±0.16 6.8 4.1 51.4M Europarl 26.1±15.4 7.1±0.6 0.95±0.05 6.4 3.0 0.65M News Commentary 25.2±13.9 7.5±1.1 0.92±0.12 7.0 3.4 0.19M Table 1: Statistics of 100K-samples of Czech-English parallel corpora; standard deviations are shown for averages. Reference Translation Machine Translation so, what’s half an hour? half an hour won’t kill you. well, don’t worry. i’ve taken out tons and tons of guys. lots of guys. don’t worry, i’ve done it to dozens of men. it’s gonna be ...... classic. yeah, sure. it’s gonna be great. greetings, all! hello everyone! but she doesn’t have much of a case. but as far as the case goes, she doesn’t have much. it was good in spite of the taste. despite the flavor, it felt good. Table 2: Example paraphrase pairs from PARANMT-50M, where each consists of an English reference translation and the machine translation of the Czech source sentence (not shown). from the training data into English. We paired the translations with the English references to form English-English paraphrase pairs. We used the pretrained Czech-English model from the NMT system of Sennrich et al. (2017). Its training data includes four sources: Common Crawl, CzEng 1.6 (Bojar et al., 2016), Europarl, and News Commentary. We did not choose Czech due to any particular linguistic properties. Wieting et al. (2017) found little difference among Czech, German, and French as source languages for backtranslation. There were much larger differences due to data domain, so we focus on the question of domain in this section. We leave the question of investigating properties of back-translation of different languages to future work. 3.1 Choosing a Data Source To assess characteristics that yield useful data, we randomly sampled 100K English reference translations from each data source and computed statistics. 
Table 1 shows the average sentence length, the average inverse document frequency (IDF) where IDFs are computed using Wikipedia sentences, and the average paraphrase score for the two sentences. The paraphrase score is calculated by averaging PARAGRAM-PHRASE embeddings (Wieting et al., 2016b) for the two sentences in each pair and then computing their cosine similarity. The table also shows the entropies of the vocabularies and constituent parses obtained using the Stanford Parser (Manning et al., 2014).2 Europarl exhibits the least diversity in terms of 2To mitigate sparsity in the parse entropy, we used only the top two levels of each parse tree. rare word usage, vocabulary entropy, and parse entropy. This is unsurprising given its formulaic and repetitive nature. CzEng has shorter sentences than the other corpora and more diverse sentence structures, as shown by its high parse entropy. In terms of vocabulary use, CzEng is not particularly more diverse than Common Crawl and News Commentary, though this could be due to the prevalence of named entities in the latter two. In Section 5.3, we empirically compare these data sources as training data for sentence embeddings. The CzEng corpus yields the strongest performance when controlling for training data size. Since its sentences are short, we suspect this helps ensure high-quality back-translations. A large portion of it is movie subtitles which tend to use a wide vocabulary and have a diversity of sentence structures; however, other domains are included as well. It is also the largest corpus, containing over 51 million sentence pairs. In addition to providing a large number of training examples for downstream tasks, this means that the NMT system should be able to produce quality translations for this subset of its training data. For all of these reasons, we chose the CzEng corpus to create PARANMT-50M. When doing so, we used beam search with a beam size of 12 and selected the highest scoring translation from the beam. It took over 10,000 GPU hours to backtranslate the CzEng corpus. We show illustrative examples in Table 2. 3.2 Manual Evaluation We conducted a manual analysis of our dataset in order to quantify its noise level and assess how the 454 Para. Score # Avg. Tri. Paraphrase Fluency Range (M) Overlap 1 2 3 1 2 3 (-0.1, 0.2] 4.0 0.00±0.0 92 6 2 1 5 94 (0.2, 0.4] 3.8 0.02±0.1 53 32 15 1 12 87 (0.4, 0.6] 6.9 0.07±0.1 22 45 33 2 9 89 (0.6, 0.8] 14.4 0.17±0.2 1 43 56 11 0 89 (0.8, 1.0] 18.0 0.35±0.2 1 13 86 3 0 97 Table 3: Manual evaluation of PARANMT-50M. 100-pair samples were drawn from five ranges of the automatic paraphrase score (first column). Paraphrase strength and fluency were judged on a 1-3 scale and counts of each rating are shown. noise can be ameliorated with filtering. Two native English speakers annotated a sample of 100 examples from each of five ranges of the Paraphrase Score.3 We obtained annotations for both the strength of the paraphrase relationship and the fluency of the translations. To annotate paraphrase strength, we adopted the annotation guidelines used by Agirre et al. (2012). The original guidelines specify six classes, which we reduced to three for simplicity. We combined the top two into one category, left the next, and combined the bottom three into the lowest category. Therefore, for a sentence pair to have a rating of 3, the sentences must have the same meaning, but some unimportant details can differ. 
To have a rating of 2, the sentences are roughly equivalent, with some important information missing or that differs slightly. For a rating of 1, the sentences are not equivalent, even if they share minor details. For fluency of the back-translation, we use the following: A rating of 3 means it has no grammatical errors, 2 means it has one to two errors, and 1 means it has more than two grammatical errors or is not a natural English sentence. Table 3 summarizes the annotations. For each score range, we report the number of pairs, the mean trigram overlap score, and the number of times each paraphrase/fluency label was present in the sample of 100 pairs. There is noise but it is largely confined to the bottom two ranges which together comprise only 16% of the entire dataset. In the highest paraphrase score range, 86% of the pairs possess a strong paraphrase relationship. The annotations suggest that PARANMT-50M contains approximately 30 million strong paraphrase pairs, and that the paraphrase score is a good indi3Even though the similarity score lies in [−1, 1], most observed scores were positive, so we chose the five ranges shown in Table 3. cator of quality. At the low ranges, we inspected the data and found there to be many errors in the sentence alignment in the original bitext. With regards to fluency, approximately 90% of the backtranslations are fluent, even at the low end of the paraphrase score range. We do see an outlier at the second-highest range of the paraphrase score, but this may be due to the small number of annotated examples. 4 Learning Sentence Embeddings To show the usefulness of the PARANMT-50M dataset, we will use it to train sentence embeddings. We adopt the learning framework from Wieting et al. (2016b), which was developed to train sentence embeddings from pairs in PPDB. We first describe the compositional sentence embedding models we will experiment with, then discuss training and our modification (“megabatching”). Models. We want to embed a word sequence s into a fixed-length vector. We denote the tth word in s as st, and we denote its word embedding by xt. We focus on three model families, though we also experiment with combining them in various ways. The first, which we call WORD, simply averages the embeddings xt of all words in s. This model was found by Wieting et al. (2016b) to perform strongly for semantic similarity tasks. The second is similar to WORD, but instead of word embeddings, we average character trigram embeddings (Huang et al., 2013). We call this TRIGRAM. Wieting et al. (2016a) found this to work well for sentence embeddings compared to other n-gram orders and to word averaging. The third family includes long short-term memory (LSTM) networks (Hochreiter and Schmidhuber, 1997). We average the hidden states to produce the final sentence embedding. For regularization during training, we scramble words with a small probability (Wieting and Gimpel, 2017). We also experiment with bidirectional LSTMs (BLSTM), averaging the forward and backward hidden states with no concatenation.4 Training. The training data is a set S of paraphrase pairs ⟨s, s′⟩and we minimize a margin4Unlike Conneau et al. (2017), we found this to outperform max-pooling for both semantic similarity and general sentence embedding tasks. 455 based loss ℓ(s, s′) = max(0, δ −cos(g(s), g(s′)) + cos(g(s), g(t))) where g is the model (WORD, TRIGRAM, etc.), δ is the margin, and t is a “negative example” taken from a mini-batch during optimization. 
The intuition is that we want the two texts to be more similar to each other than to their negative examples. To select t we choose the most similar sentence in some set. For simplicity we use the mini-batch for this set, i.e., t = argmax t′:⟨t′,·⟩∈Sb\{⟨s,s′⟩} cos(g(s), g(t′)) where Sb ⊆S is the current mini-batch. Modification: mega-batching. By using the mini-batch to select negative examples, we may be limiting the learning procedure. That is, if all potential negative examples in the mini-batch are highly dissimilar from s, the loss will be too easy to minimize. Stronger negative examples can be obtained by using larger mini-batches, but large mini-batches are sub-optimal for optimization. Therefore, we propose a procedure we call “mega-batching.” We aggregate M mini-batches to create one mega-batch and select negative examples from the mega-batch. Once each pair in the mega-batch has a negative example, the megabatch is split back up into M mini-batches and training proceeds. We found that this provides more challenging negative examples during learning as shown in Section 5.5. Table 6 shows results for different values of M, showing consistently higher correlations with larger M values. 5 Experiments We now investigate how best to use our generated paraphrase data for training paraphrastic sentence embeddings. 5.1 Evaluation We evaluate sentence embeddings using the SemEval semantic textual similarity (STS) tasks from 2012 to 2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016) and the STS Benchmark (Cer et al., 2017). Given two sentences, the aim of the STS tasks is to predict their similarity on a 0-5 scale, where 0 indicates the sentences are on different topics and 5 means they are completely equivalent. As our test set, we report the average Pearson’s r Training Corpus WORD TRIGRAM LSTM Common Crawl 80.9 80.2 79.1 CzEng 1.6 83.6 81.5 82.5 Europarl 78.9 78.0 80.4 News Commentary 80.2 78.2 80.5 Table 4: Pearson’s r × 100 on STS2017 when training on 100k pairs from each back-translated parallel corpus. CzEng works best for all models. over each year of the STS tasks from 2012-2016. We use the small (250-example) English dataset from SemEval 2017 (Cer et al., 2017) as a development set, which we call STS2017 below. The supplementary material contains a description of a method to obtain a paraphrase lexicon from PARANMT-50M that is on par with that provided by PPDB 2.0. We also evaluate our sentence embeddings on a range of additional tasks that have previously been used for evaluating sentence representations (Kiros et al., 2015). 5.2 Experimental Setup For training sentence embeddings on PARANMT50M, we follow the experimental procedure of Wieting et al. (2016b). We use PARAGRAMSL999 embeddings (Wieting et al., 2015) to initialize the word embedding matrix for all models that use word embeddings. We fix the mini-batch size to 100 and the margin δ to 0.4. We train all models for 5 epochs. For optimization we use Adam (Kingma and Ba, 2014) with a learning rate of 0.001. For the LSTM and BLSTM, we fixed the scrambling rate to 0.3.5 5.3 Dataset Comparison We first compare parallel data sources. We evaluate the quality of a data source by using its backtranslations paired with its English references as training data for paraphrastic sentence embeddings. We compare the four data sources described in Section 3. We use 100K samples from each corpus and trained 3 different models on each: WORD, TRIGRAM, and LSTM. 
Table 4 shows that CzEng provides the best training data for all models, so we used it to create PARANMT-50M and for all remaining experiments. 5As in our prior work (Wieting and Gimpel, 2017), we found that scrambling significantly improves results, even with our much larger training set. But while we previously used a scrambling rate of 0.5, we found that a smaller rate of 0.3 worked better when training on PARANMT-50M, presumably due to the larger training set. 456 Filtering Method Model Avg. Translation Score 83.2 Trigram Overlap 83.1 Paraphrase Score 83.3 Table 5: Pearson’s r × 100 on STS2017 for the best training fold across the average of WORD, TRIGRAM, and LSTM models for each filtering method. CzEng is diverse in terms of both vocabulary and sentence structure. It has significantly shorter sentences than the other corpora, and has much more training data, so its translations are expected to be better than those in the other corpora. Wieting et al. (2017) found that sentence length was the most important factor in filtering quality training data, presumably due to how NMT quality deteriorates with longer sentences. We suspect that better translations yield better data for training sentence embeddings. 5.4 Data Filtering Since the PARANMT-50M dataset is so large, it is computationally demanding to train sentence embeddings on it in its entirety. So, we filter the data to create a training set for sentence embeddings. We experiment with three simple methods: (1) the length-normalized translation score from decoding, (2) trigram overlap (Wieting et al., 2017), and (3) the paraphrase score from Section 3. Trigram overlap is calculated by counting trigrams in the reference and translation, then dividing the number of shared trigrams by the total number in the reference or translation, whichever has fewer. We filtered the back-translated CzEng data using these three strategies. We ranked all 51M+ paraphrase pairs in the dataset by the filtering measure under consideration and then split the data into tenths (so the first tenth contains the bottom 10% under the filtering criterion, the second contains those in the bottom 10-20%, etc.). We trained WORD, TRIGRAM, and LSTM models for a single epoch on 1M examples sampled from each of the ten folds for each filtering criterion. We averaged the correlation on the STS2017 data across models for each fold. Table 5 shows the results of the filtering methods. Filtering based on the paraphrase score produces the best data for training sentence embeddings. We randomly selected 5M examples from the top two scoring folds using paraphrase score filM WORD TRIGRAM LSTM 1 82.3 81.5 81.5 20 84.0 83.1 84.6 40 84.1 83.4 85.0 Table 6: Pearson’s r × 100 on STS2017 with different mega-batch sizes M. original sir, i’m just trying to protect. negative examples: M =1 i mean, colonel... M =20 i only ask that the baby be safe. M =40 just trying to survive. on instinct. original i’m looking at him, you know? M =1 they know that i’ve been looking for her. M =20 i’m keeping him. M =40 i looked at him with wonder. original i’il let it go a couple of rounds. M =1 sometimes the ball doesn’t go down. M =20 i’ll take two. M =40 i want you to sit out a couple of rounds, all right? Table 7: Negative examples for various megabatch sizes M with the BLSTM model. tering, ensuring that we only selected examples in which both sentences have a maximum length of 30 tokens.6 These resulting 5M examples form the training data for the rest of our experiments. 
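Both the trigram-overlap and paraphrase-score filters are simple to compute. The sketch below is our own illustrative code, with a generic `embed` function standing in for the PARAGRAM-PHRASE word embeddings used for the actual paraphrase score; it implements the trigram-overlap measure exactly as defined above and a cosine-based paraphrase score over averaged word vectors.

```python
import numpy as np

def trigram_overlap(reference, translation):
    """Shared trigrams divided by the trigram count of the shorter side, as described above."""
    def trigrams(tokens):
        return {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}
    ref, hyp = trigrams(reference.split()), trigrams(translation.split())
    if not ref or not hyp:
        return 0.0
    return len(ref & hyp) / min(len(ref), len(hyp))

def paraphrase_score(reference, translation, embed):
    """Cosine similarity between averaged word embeddings of the two sentences.

    `embed` maps a token to a vector; here it stands in for PARAGRAM-PHRASE embeddings.
    """
    def avg(sent):
        return np.mean([embed(w) for w in sent.split()], axis=0)
    a, b = avg(reference), avg(translation)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(trigram_overlap("so , what's half an hour ?", "half an hour won't kill you ."))
```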
Note that many more than 5M pairs from the dataset are useful, as suggested by our human evaluations in Section 3.2. We have experimented with doubling the training data when training our best sentence similarity model and found the correlation increased by more than half a percentage point on average across all datasets. 5.5 Effect of Mega-Batching Table 6 shows the impact of varying the megabatch size M when training for 5 epochs on our 5M-example training set. For all models, larger mega-batches improve performance. There is a smaller gain when moving from 20 to 40, but all models show clear gains over M = 1. Table 7 shows negative examples with different mega-batch sizes M. We use the BLSTM model and show the negative examples (nearest neighbors from the mega-batch excluding the current training example) for three sentences. Using larger mega-batches improves performance, presumably by producing more compelling negative examples for the learning procedure. This is likely more important when training on sentences than 6Wieting et al. (2017) found that sentence length cutoffs were effective for filtering back-translated parallel text. 457 Training Data Model Dim. 2012 2013 2014 2015 2016 WORD 300 66.2 61.8 76.2 79.3 77.5 TRIGRAM 300 67.2 60.3 76.1 79.7 78.3 LSTM 300 67.0 62.3 76.3 78.5 76.0 LSTM 900 68.0 60.4 76.3 78.8 75.9 Our PARANMT BLSTM 900 67.4 60.2 76.1 79.5 76.5 Work WORD + TRIGRAM (addition) 300 67.3 62.8 77.5 80.1 78.2 WORD + TRIGRAM + LSTM (addition) 300 67.1 62.8 76.8 79.2 77.0 WORD, TRIGRAM (concatenation) 600 67.8 62.7 77.4 80.3 78.1 WORD, TRIGRAM, LSTM (concatenation) 900 67.7 62.8 76.9 79.8 76.8 SimpWiki WORD, TRIGRAM (concatenation) 600 61.8 58.4 74.4 77.0 74.0 1st Place System 64.8 62.0 74.3 79.0 77.7 STS Competitions 2nd Place System 63.4 59.1 74.2 78.0 75.7 3rd Place System 64.1 58.3 74.3 77.8 75.7 InferSent (AllSNLI) (Conneau et al., 2017) 4096 58.6 51.5 67.8 68.3 67.2 InferSent (SNLI) (Conneau et al., 2017) 4096 57.1 50.4 66.2 65.2 63.5 FastSent (Hill et al., 2016) 100 63 DictRep (Hill et al., 2016) 500 67 Related Work SkipThought (Kiros et al., 2015) 4800 29 CPHRASE (Pham et al., 2015) 65 CBOW (from Hill et al., 2016) 500 64 BLEU (Papineni et al., 2002) 39.2 29.5 42.8 49.8 47.4 METEOR (Denkowski and Lavie, 2014) 53.4 47.6 63.7 68.8 61.8 Table 8: Pearson’s r × 100 on the STS tasks of our models and those from related work. We compare to the top performing systems from each SemEval STS competition. Note that we are reporting the mean correlations over domains for each year rather than weighted means as used in the competitions. Our best performing overall model (WORD, TRIGRAM) is in bold. Dim. Corr. Our Work (Unsupervised) WORD 300 79.2 TRIGRAM 300 79.1 LSTM 300 78.4 WORD + TRIGRAM (addition) 300 79.9 WORD + TRIGRAM + LSTM (addition) 300 79.6 WORD, TRIGRAM (concatenation) 600 79.9 WORD, TRIGRAM, LSTM (concatenation) 900 79.2 Related Work (Unsupervised) InferSent (AllSNLI) (Conneau et al., 2017) 4096 70.6 C-PHRASE (Pham et al., 2015) 63.9 GloVe (Pennington et al., 2014) 300 40.6 word2vec (Mikolov et al., 2013) 300 56.5 sent2vec (Pagliardini et al., 2017) 700 75.5 Related Work (Supervised) Dep. Tree LSTM (Tai et al., 2015) 71.2 Const. Tree LSTM (Tai et al., 2015) 71.9 CNN (Shao, 2017) 78.4 Table 9: Results on STS Benchmark test set. prior work on learning from text snippets (Wieting et al., 2015, 2016b; Pham et al., 2015). 
5.6 Model Comparison Table 8 shows results on the 2012-2016 STS tasks and Table 9 shows results on the STS Benchmark.7 Our best models outperform all STS competition systems and all related work of which we are 7Baseline results are from http://ixa2.si.ehu. es/stswiki/index.php/STSbenchmark, except for the unsupervised InferSent result which we computed. Models Mean Pearson Abs. Diff. WORD / TRIGRAM 2.75 WORD / LSTM 2.17 TRIGRAM / LSTM 2.89 Table 10: The means (over all 25 STS competition datasets) of the absolute differences in Pearson’s r between each pair of models. aware on the 2012-2016 STS datasets. Note that the large improvement over BLEU and METEOR suggests that our embeddings could be useful for evaluating machine translation output. Overall, our individual models (WORD, TRIGRAM, LSTM) perform similarly. Using 300 dimensions appears to be sufficient; increasing dimensionality does not necessarily improve correlation. When examining particular STS tasks, we found that our individual models showed marked differences on certain tasks. Table 10 shows the mean absolute difference in Pearson’s r over all 25 datasets. The TRIGRAM model shows the largest differences from the other two, both of which use word embeddings. This suggests that TRIGRAM may be able to complement the other two by providing information about words that are unknown to models that rely on word embeddings. We experiment with two ways of combining models. The first is to define additive architectures 458 Target Syntax Paraphrase original with the help of captain picard, the borg will be prepared for everything. (SBARQ(ADVP)(,)(S)(,)(SQ)) now, the borg will be prepared by picard, will it? (S(NP)(ADVP)(VP)) the borg here will be prepared for everything. original you seem to be an excellent burglar when the time comes. (S(SBAR)(,)(NP)(VP)) when the time comes, you’ll be a great thief. (S(‘‘)(UCP)(’’)(NP)(VP)) “you seem to be a great burglar, when the time comes.” you said. Table 11: Syntactically controlled paraphrases generated by the SCPN trained on PARANMT-50M. that form the embedding for a sentence by adding the embeddings computed by two (or more) individual models. All parameters are trained jointly just like when we train individual models; that is, we do not first train two simple models and add their embeddings. The second way is to define concatenative architectures that form a sentence embedding by concatenating the embeddings computed by individual models, and again to train all parameters jointly. In Table 8 and Table 9, these combinations show consistent improvement over the individual models as well as the larger LSTM and BLSTM. Concatenating WORD and TRIGRAM results in the best performance on average across STS tasks, outperforming the best supervised systems from each year. We have released the pretrained model for these “WORD, TRIGRAM” embeddings. In addition to providing a strong baseline for future STS tasks, these embeddings offer the advantages of being extremely efficient to compute and being robust to unknown words. We show the usefulness of PARANMT by also reporting the results of training the “WORD, TRIGRAM” model on SimpWiki, a dataset of aligned sentences from Simple English and standard English Wikipedia (Coster and Kauchak, 2011). It has been shown useful for training sentence embeddings in past work (Wieting and Gimpel, 2017). However, Table 8 shows that training on PARANMT leads to gains in correlation of 3 to 6 points compared to SimpWiki. 
6 Paraphrase Generation In addition to powering state-of-the-art paraphrastic sentence embeddings, our dataset is useful for paraphrase generation. We briefly describe two efforts in paraphrase generation here. We have found that training an encoder-decoder model on PARANMT-50M can produce a paraphrase generation model that canonicalizes text. For this experiment, we used a bidirectional LSTM encoder and a two-layer LSTM decoder original overall, i that it’s a decent buy, and am happy that i own it. paraphrase it’s a good buy, and i’m happy to own it. original oh, that’s a handsome women, that is. paraphrase that’s a beautiful woman. Table 12: Examples from our paraphrase generation model that show the ability to canonicalize text and correct grammatical errors. with soft attention over the encoded states (Bahdanau et al., 2015). The attention computation consists of a bilinear product with a learned parameter matrix. Table 12 shows examples of output generated by this model, showing how the model is able to standardize the text and correct grammatical errors. This model would be interesting to evaluate for automatic grammar correction as it does so without any direct supervision. Future work could also use this canonicalization to improve performance of models by standardizing inputs and removing noise from data. PARANMT-50M has also been used for syntactically-controlled paraphrase generation; this work is described in detail by Iyyer et al. (2018). A syntactically controlled paraphrase network (SCPN) is trained to generate a paraphrase of a sentence whose constituent structure follows a provided parse template. A parse template contains the top two levels of a linearized parse tree. Table 11 shows example outputs using the SCPN. The paraphrases mostly preserve the semantics of the input sentences while changing their syntax to fit the target syntactic templates. The SCPN was used for augmenting training data and finding adversarial examples. We believe that PARANMT-50M and future datasets like it can be used to generate rich paraphrases that improve the performance and robustness of models on a multitude of NLP tasks. 7 Discussion One way to view PARANMT-50M is as a way to represent the learned translation model in a mono459 lingual generated dataset. This raises the question of whether we could learn an effective sentence embedding model from the original parallel text used to train the NMT system, rather than requiring the intermediate step of generating a paraphrase training set. However, while Hill et al. (2016) and Mallinson et al. (2017) used trained NMT models to produce sentence similarity scores, their correlations are considerably lower than ours (by 10% to 35% absolute in terms of Pearson). It appears that NMT encoders form representations that do not necessarily encode the semantics of the sentence in a way conducive to STS evaluations. They must instead create representations suitable for a decoder to generate a translation. These two goals of representing sentential semantics and producing a translation, while likely correlated, evidently have some significant differences. Our use of an intermediate dataset leads to the best results, but this may be due to our efforts in optimizing learning for this setting (Wieting et al., 2016b; Wieting and Gimpel, 2017). Future work will be needed to develop learning frameworks that can leverage parallel text directly to reach the same or improved correlations on STS tasks. 
8 Conclusion We described the creation of PARANMT-50M, a dataset of more than 50M English sentential paraphrase pairs. We showed how to use PARANMT50M to train paraphrastic sentence embeddings that outperform supervised systems on STS tasks, as well as how it can be used for generating paraphrases for purposes of data augmentation, robustness, and even grammar correction. The key advantage of our approach is that it only requires parallel text. There are hundreds of millions of parallel sentence pairs, and more are being generated continually. Our procedure is immediately applicable to the wide range of languages for which we have parallel text. We release PARANMT-50M, our code, and pretrained sentence embeddings, which also exhibit strong performance as general-purpose representations for a multitude of tasks. We hope that PARANMT-50M, along with our embeddings, can impart a notion of meaning equivalence to improve NLP systems for a variety of tasks. We are actively investigating ways to apply these two new resources to downstream applications, including machine translation, question answering, and additional paraphrase generation tasks. Acknowledgments We thank the anonymous reviewers, the developers of Theano (Theano Development Team, 2016), the developers of PyTorch (Paszke et al., 2017), and NVIDIA Corporation for donating GPUs used in this research. References Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. Proceedings of SemEval, pages 497–511. Eneko Agirre, Daniel Cer, Mona Diab, Aitor GonzalezAgirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity. Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation. Association for Computational Linguistics. Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In Proceedings of the International Conference on Learning Representations. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of 460 the International Conference on Learning Representations. Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. 
In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. Regina Barzilay and Kathleen R McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proceedings of the 39th annual meeting on Association for Computational Linguistics, pages 50–57. Ondˇrej Bojar, Ondˇrej Duˇsek, Tom Kocmi, Jindˇrich Libovick´y, Michal Nov´ak, Martin Popel, Roman Sudarikov, and Duˇsan Variˇs. 2016. CzEng 1.6: Enlarged Czech-English Parallel Corpus with Processing Tools Dockered. In Text, Speech, and Dialogue: 19th International Conference, TSD 2016, number 9924 in Lecture Notes in Computer Science, pages 231–238, Cham / Heidelberg / New York / Dordrecht / London. Masaryk University, Springer International Publishing. Daniel Cer, Mona Diab, Eneko Agirre, Inigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 Task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680, Copenhagen, Denmark. William Coster and David Kauchak. 2011. Simple English Wikipedia: a new text simplification task. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 665–669. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation. Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the 20th international conference on Computational Linguistics, page 350. William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Zhe Gan, Yunchen Pu, Ricardo Henao, Chunyuan Li, Xiaodong He, and Lawrence Carin. 2017. Learning generic sentence representations using convolutional neural networks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2380–2390, Copenhagen, Denmark. Juri Ganitkevitch and Chris Callison-Burch. 2014. The multilingual paraphrase database. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014). Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 758–764, Atlanta, Georgia. Felix Hill, Kyunghyun Cho, Sebastien Jean, Coline Devin, and Yoshua Bengio. 2014a. Embedding word similarity with neural machine translation. arXiv preprint arXiv:1412.6448. Felix Hill, KyungHyun Cho, Sebastien Jean, Coline Devin, and Yoshua Bengio. 2014b. 
Not all neural embeddings are born equal. arXiv preprint arXiv:1410.0718. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8). Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Youxuan Jiang, Jonathan K. Kummerfeld, and Walter S. Lasecki. 2017. Understanding task design trade-offs in crowdsourced paraphrase collection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 103–109, Vancouver, Canada. 461 Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems 28, pages 3294–3302. Wuwei Lan, Siyu Qiu, Hua He, and Wei Xu. 2017. A continuously growing dataset of sentential paraphrases. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1224–1234, Copenhagen, Denmark. Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053. Dekang Lin and Patrick Pantel. 2001. Discovery of inference rules for question answering. Natural Language Engineering, 7(4):342–360. Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 881–893. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems, pages 6297–6308. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Blaise Thomson, Milica Gaˇsi´c, Lina M. Rojas-Barahona, PeiHao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142–148, San Diego, California. Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2017. 
Unsupervised learning of sentence embeddings using compositional n-gram features. arXiv preprint arXiv:1703.02507. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. Proceedings of Empirical Methods in Natural Language Processing (EMNLP 2014). Nghia The Pham, Germ´an Kruszewski, Angeliki Lazaridou, and Marco Baroni. 2015. Jointly optimizing word representations for lexical and sentential tasks with the c-phrase model. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Chris Quirk, Chris Brockett, and William Dolan. 2004. Monolingual machine translation for paraphrase generation. In Proceedings of EMNLP 2004, pages 142–149, Barcelona, Spain. Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel L¨aubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. 2017. Nematus: a toolkit for neural machine translation. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 65–68, Valencia, Spain. Yang Shao. 2017. HCTI at SemEval-2017 task 1: Use convolutional neural network to evaluate semantic textual similarity. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 130–133. Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems. Yui Suzuki, Tomoyuki Kajiwara, and Mamoru Komachi. 2017. Building a non-trivial paraphrase corpus using multiple machine translation systems. In Proceedings of ACL 2017, Student Research Workshop, pages 36–42, Vancouver, Canada. Association for Computational Linguistics. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics 462 and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016a. Charagram: Embedding words and sentences via character n-grams. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1504–1515. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016b. Towards universal paraphrastic sentence embeddings. In Proceedings of the International Conference on Learning Representations. John Wieting, Mohit Bansal, Kevin Gimpel, Karen Livescu, and Dan Roth. 2015. From paraphrase database to compositional paraphrase model and back. 
Transactions of the Association for Computational Linguistics. John Wieting and Kevin Gimpel. 2017. Revisiting recurrent networks for paraphrastic sentence embeddings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2078–2088, Vancouver, Canada. John Wieting, Jonathan Mallinson, and Kevin Gimpel. 2017. Learning paraphrastic sentence embeddings from back-translated bitext. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 274–285, Copenhagen, Denmark. Wei Xu, Chris Callison-Burch, and William B Dolan. 2015. SemEval-2015 task 1: Paraphrase and semantic similarity in Twitter (PIT). In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval). Wei Xu, Alan Ritter, Chris Callison-Burch, William B. Dolan, and Yangfeng Ji. 2014. Extracting lexically divergent paraphrases from Twitter. Transactions of the Association for Computational Linguistics, 2:435–448.
2018
42
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 463–473 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 463 Event2Mind: Commonsense Inference on Events, Intents, and Reactions Hannah Rashkin†⇤Maarten Sap†⇤Emily Allaway† Noah A. Smith† Yejin Choi†‡ †Paul G. Allen School of Computer Science & Engineering, University of Washington ‡Allen Institute for Artificial Intelligence {hrashkin,msap,eallaway,nasmith,yejin}@cs.washington.edu Abstract We investigate a new commonsense inference task: given an event described in a short free-form text (“X drinks coffee in the morning”), a system reasons about the likely intents (“X wants to stay awake”) and reactions (“X feels alert”) of the event’s participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people’s intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts. 1 Introduction Understanding a narrative requires commonsense reasoning about the mental states of people in relation to events. For example, if “Alex is dragging his feet at work”, pragmatic implications about Alex’s intent are that “Alex wants to avoid doing things” (Figure 1). We can also infer that Alex’s emotional reaction might be feeling “lazy” or “bored”. Furthermore, while not explicitly mentioned, we can infer that people other than Alex are affected by the situation, and these people are likely to feel “frustrated” or “impatient”. This type of pragmatic inference can potentially be useful for a wide range of NLP applications ⇤These two authors contributed equally. PersonX drags PersonX's feet PersonX cooks thanksgiving dinner PersonX reads PersonY's diary to avoid doing things lazy, bored frustrated, impatient to impress their family tired, a sense of belonging impressed to be nosey, know secrets guilty, curious angry, violated, betrayed X's intent X's reaction Y's reaction X's intent X's reaction Y's reaction X's intent X's reaction Y's reaction Figure 1: Examples of commonsense inference on mental states of event participants. In the third example event, common sense tells us that Y is likely to feel betrayed as a result of X reading their diary. that require accurate anticipation of people’s intents and emotional reactions, even when they are not explicitly mentioned. For example, an ideal dialogue system should react in empathetic ways by reasoning about the human user’s mental state based on the events the user has experienced, without the user explicitly stating how they are feeling. Similarly, advertisement systems on social media should be able to reason about the emotional reactions of people after events such as mass shootings and remove ads for guns which might increase social distress (Goel and Isaac, 2016). Also, pragmatic inference is a necessary step toward automatic narrative understanding and generation (Tomai and Forbus, 2010; Ding and Riloff, 2016; Ding et al., 2017). 
However, this type of social commonsense reasoning goes far beyond the widely studied entailment tasks (Bowman et al., 2015; Dagan et al., 2006) and thus falls outside the scope of existing benchmarks. In this paper, we introduce a new task, corpus, 464 PersonX’s Intent Event Phrase PersonX’s Reaction Others’ Reactions to express anger to vent their frustration to get PersonY’s full attention PersonX starts to yell at PersonY mad frustrated annoyed shocked humiliated mad at PersonX to communicate something without being rude to let the other person think for themselves to be subtle PersonX drops a hint sly secretive frustrated oblivious surprised grateful to catch the criminal to be civilized justice PersonX reports to the police anxious worried nervous sad angry regret to wake up to feel more energized PersonX drinks a cup of coffee alert awake refreshed NONE to be feared to be taken seriously to exact revenge PersonX carries out PersonX’s threat angry dangerous satisfied sad afraid angry NONE It starts snowing NONE calm peaceful cold Table 1: Example annotations of intent and reactions for 6 event phrases. Each annotator could fill in up to three free-responses for each mental state. and model, supporting commonsense inference on events with a specific focus on modeling stereotypical intents and reactions of people, described in short free-form text. Our study is in a similar spirit to recent efforts of Ding and Riloff (2016) and Zhang et al. (2017), in that we aim to model aspects of commonsense inference via natural language descriptions. Our new contributions are: (1) a new corpus that supports commonsense inference about people’s intents and reactions over a diverse range of everyday events and situations, (2) inference about even those people who are not directly mentioned by the event phrase, and (3) a task formulation that aims to generate the textual descriptions of intents and reactions, instead of classifying their polarities or classifying the inference relations between two given textual descriptions. Our work establishes baseline performance on this new task, demonstrating that, given the phrase-level inference dataset, neural encoderdecoder models can successfully compose phrasal embeddings for previously unseen events and reason about the mental states of their participants. Furthermore, in order to showcase the practical implications of commonsense inference on events and people’s mental states, we apply our model to modern movie scripts, which provide a new insight into the gender bias in modern films beyond what previous studies have offered (England et al., 2011; Agarwal et al., 2015; Ramakrishna et al., 2017; Sap et al., 2017). The resulting corpus includes around 25,000 event phrases, which combine automatically extracted phrases from stories and blogs with all idiomatic verb phrases listed in the Wiktionary. Our corpus is publicly available.1 2 Dataset One goal of our investigation is to probe whether it is feasible to build computational models that can perform limited, but well-scoped commonsense inference on short free-form text, which we refer to as event phrases. While there has been much prior research on phrase-level paraphrases (Pavlick et al., 2015) and phrase-level entailment (Dagan et al., 2006), relatively little prior work focused on phrase-level inference that requires prag1https://tinyurl.com/event2mind 465 matic or commonsense interpretation. 
We scope our study to two distinct types of inference: given a phrase that describes an event, we want to reason about the likely intents and emotional reactions of people who caused or affected by the event. This complements prior work on more general commonsense inference (Speer and Havasi, 2012; Li et al., 2016; Zhang et al., 2017), by focusing on the causal relations between events and people’s mental states, which are not well covered by most existing resources. We collect a wide range of phrasal event descriptions from stories, blogs, and Wiktionary idioms. Compared to prior work on phrasal embeddings (Wieting et al., 2015; Pavlick et al., 2015), our work generalizes the phrases by introducing (typed) variables. In particular, we replace words that correspond to entity mentions or pronouns with typed variables such as PersonX or PersonY, as shown in examples in Table 1. More formally, the phrases we extract are a combination of a verb predicate with partially instantiated arguments. We keep specific arguments together with the predicate, if they appear frequently enough (e.g., PersonX eats pasta for dinner). Otherwise, the arguments are replaced with an untyped blank (e.g., PersonX eats for dinner). In our work, only person mentions are replaced with typed variables, leaving other types to future research. Inference types The first type of pragmatic inference is about intent. We define intent as an explanation of why the agent causes a volitional event to occur (or “none” if the event phrase was unintentional). The intent can be considered a mental pre-condition of an action or an event. For example, if the event phrase is PersonX takes a stab at , the annotated intent might be that “PersonX wants to solve a problem”. The second type of pragmatic inference is about emotional reaction. We define reaction as an explanation of how the mental states of the agent and other people involved in the event would change as a result. The reaction can be considered a mental post-condition of an action or an event. For example, if the event phrase is that PersonX gives PersonY as a gift, PersonX might “feel good about themselves” as a result, and PersonY might “feel grateful” or “feel thankful”. Source # Unique Events # Unique Verbs Average  ROC Story 13,627 639 0.57 G. N-grams 7,066 789 0.39 Spinn3r 2,130 388 0.41 Idioms 1,916 442 0.42 Total 24,716 1,333 0.45 Table 2: Data and annotation agreement statistics for our new phrasal inference corpus. Each event is annotated by three crowdworkers. 2.1 Event Extraction We extract phrasal events from three different corpora for broad coverage: the ROC Story training set (Mostafazadeh et al., 2016), the Google Syntactic N-grams (Goldberg and Orwant, 2013), and the Spinn3r corpus (Gordon and Swanson, 2008). We derive events from the set of verb phrases in our corpora, based on syntactic parses (Klein and Manning, 2003). We then replace the predicate subject and other entities with the typed variables (e.g., PersonX, PersonY), and selectively substitute verb arguments with blanks ( ). We use frequency thresholds to select events to annotate (for details, see Appendix A.1). Additionally, we supplement the list of events with all 2,000 verb idioms found in Wiktionary, in order to cover events that are less compositional.2 Our final annotation corpus contains nearly 25,000 event phrases, spanning over 1,300 unique verb predicates (Table 2). 
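To make the normalization above concrete, the following is a minimal Python sketch, purely for illustration and not the authors' extraction pipeline: it assumes tokens have already been flagged as person mentions and as verb arguments by an upstream parser, and it uses a simple per-argument frequency threshold where the paper applies the criteria detailed in its Appendix A.1.

def normalize_event(tokens, is_person, is_argument, arg_counts, min_count=100):
    """Rewrite an extracted verb phrase into a typed event pattern.

    tokens      : surface tokens, e.g. ["Mary", "eats", "pasta", "for", "dinner"]
    is_person   : parallel flags marking person mentions (assumed given)
    is_argument : parallel flags marking verb arguments (assumed given)
    arg_counts  : corpus frequency of each argument token (hypothetical)
    min_count   : keep an argument only if it is frequent enough
    """
    names = ["PersonX", "PersonY", "PersonZ"]
    person_vars = {}                      # surface form -> typed variable
    out = []
    for tok, person, arg in zip(tokens, is_person, is_argument):
        if person:
            if tok not in person_vars and len(person_vars) < len(names):
                person_vars[tok] = names[len(person_vars)]
            out.append(person_vars.get(tok, names[-1]))
        elif arg and arg_counts.get(tok, 0) < min_count:
            out.append("___")             # untyped blank for rare arguments
        else:
            out.append(tok)
    return " ".join(out)

# normalize_event(["Mary", "eats", "pasta", "for", "dinner"],
#                 [True, False, False, False, False],
#                 [False, False, True, False, True],
#                 {"pasta": 520, "dinner": 4100})
# -> "PersonX eats pasta for dinner"; a rare argument would instead be
# replaced by the blank, yielding "PersonX eats ___ for dinner".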
2.2 Crowdsourcing We design an Amazon Mechanical Turk task to annotate the mental pre- and post-conditions of event phrases. A snippet of our MTurk HIT design is shown in Figure 2. For each phrase, we ask three annotators whether the agent of the event, PersonX, intentionally causes the event, and if so, to provide up to three possible textual descriptions of their intents. We then ask annotators to provide up to three possible reactions that PersonX might experience as a result. We also ask annotators to provide up to three possible reactions of other people, when applicable. These other people can be either explicitly mentioned (e.g., “PersonY” in PersonX punches PersonY’s lights out), or only implied 2We compiled the list of idiomatic verb phrases by crossreferencing between Wiktionary’s English idioms category and the Wiktionary English verbs categories. 466 Event PersonX punches PersonY's lights out 1. Does this event make sense enough for you to answer questions 2-5? (Or does it have too many meanings?) Yes, can answer No, can't answer or has too many meanings Before the event 2. Does PersonX willingly cause this event? Yes No a). Why? (Try to describe without reusing words from the event) Because PersonX wants ... to (be) [write a reason] [write another reason - optional] [write another reason - optional] Figure 2: Intent portion of our annotation task. We allow annotators to label events as invalid if the phrase is unintelligible. The full annotation setup is shown in Figure 8 in the appendix. (e.g., given the event description PersonX yells at the classroom, we can infer that other people such as “students” in the classroom may be affected by the act of PersonX). For quality control, we periodically removed workers with high disagreement rates, at our discretion. Coreference among Person variables With the typed Person variable setup, events involving multiple people can have multiple meanings depending on coreference interpretation (e.g., PersonX eats PersonY’s lunch has very different mental state implications from PersonX eats PersonX’s lunch). To prune the set of events that will be annotated for intent and reaction, we ran a preliminary annotation to filter out candidate events that have implausible coreferences. In this preliminary task, annotators were shown a combinatorial list of coreferences for an event (e.g., PersonX punches PersonX’s lights out, PersonX punches PersonY’s lights out) and were asked to select only the plausible ones (e.g., PersonX punches PersonY’s lights out). Each set of coreferences was annotated by 3 workers, yielding an overall agreement of =0.4. This annotation excluded 8,406 events with implausible coreference from our set (out of 17,806 events). 2.3 Mental State Descriptions Our dataset contains nearly 25,000 event phrases, with annotators rating 91% of our extracted events as “valid” (i.e., the event makes sense). Of those events, annotations for the multiple choice portions of the task (whether or not there exists intent/reaction) agree moderately, with an average Cohen’s = 0.45 (Table 2). The individual  scores generally indicate that turkers disagree half as often as if they were randomly selecting answers. Importantly, this level of agreement is acceptable in our task formulation for two reasons. 
First, unlike linguistic annotations on syntax or semantics where experts in the corresponding theory would generally agree on a single correct label, pragmatic interpretations may better be defined as distributions over multiple correct labels (e.g., after PersonX takes a test, PersonX might feel relieved and/or stressed; de Marneffe et al., 2012). Second, because we formulate our task as a conditional language modeling problem, where a distribution over the textual descriptions of intents and reactions is conditioned on the event description, this variation in the labels is only as expected. A majority of our events are annotated as willingly caused by the agent (86%, Cohen’s = 0.48), and 26% involve other people (= 0.41). Most event patterns in our data are fully instantiated, with only 22% containing blanks ( ). In our corpus, the intent annotations are slightly longer (3.4 words on average) than the reaction annotations (1.5 words). 3 Models Given an event phrase, our models aim to generate three entity-specific pragmatic inferences: PersonX’s intent, PersonX’s reaction, and others’ reactions. The general outline of our model architecture is illustrated in Figure 3. The input to our model is an event pattern described through free-form text with typed variables such as PersonX gives PersonY as a gift. For notation purposes, we describe each event pattern E as a sequence of word embeddings he1, e2, . . . , eni 2 Rn⇥D. This input is encoded as a vector hE 2 RH that will be used for predicting output. The output of the model is its hypotheses about PersonX’s intent, PersonX’s reaction, and others’ reactions (vi,vx, and vo, respectively). We experiment with representing the 467 PersonX’s Intent decoder vi: start, a, fight vx: powerful vo: defensive PersonX’s Reaction decoder Others’ Reaction decoder Pre-condition Post-condition Event2mind Encoder PersonX punches PersonY’s lights out E = e1…en f (e1…en) hE softmax(Wi hE+bi) softmax(Wx hE+bx) softmax(Wo hE+bo) Figure 3: Overview of the model architecture. From an encoded event, our model predicts intents and reactions in a multitask setting. output in two decoding set-ups: three vectors interpretable as discrete distributions over words and phrases (n-gram reranking) or three sequences of words (sequence decoding). Encoding events The input event phrase E is compressed into an H-dimensional embedding hE via an encoding function f : Rn⇥D ! RH: hE = f(e1, . . . , en) We experiment with several ways for defining f, inspired by standard techniques in sentence and phrase classification (Kim, 2014). First, we experiment with max-pooling and mean-pooling over the word vectors {ei}n i=1. We also consider a convolutional neural network (ConvNet; LeCun et al., 1998) taking the last layer of the network as the encoded version of the event. Lastly, we encode the event phrase with a bi-directional RNN (specifically, a GRU; Cho et al., 2014), concatenating the final hidden states of the forward and backward cells as the encoding: hE = [−! hn; − h1]. For hyperparameters and other details, we refer the reader to Appendix B. Though the event sequences are typically rather short (4.6 tokens on average), our model still benefits from the ConvNet and BiRNN’s ability to compose words. Pragmatic inference decoding We use three decoding modules that take the event phrase embedding hE and output distributions of possible PersonX’s intent (vi), PersonX’s reactions (vx), and others’ reactions (vo). We experiment with two different decoder set-ups. 
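Before turning to the two decoder set-ups, the encoding functions described above can be sketched in PyTorch roughly as follows. The 300-dimensional embeddings and 100-dimensional GRU match the sizes reported in the paper, but the module itself is a simplified illustration (the ConvNet variant is omitted), not the authors' implementation.

import torch
import torch.nn as nn

class EventEncoder(nn.Module):
    """Encode an event phrase (a sequence of word embeddings) into h_E.

    mode is one of "mean", "max", or "birnn"; the BiRNN variant
    concatenates the final forward and backward GRU hidden states."""
    def __init__(self, emb_dim=300, hidden=100, mode="birnn"):
        super().__init__()
        self.mode = mode
        if mode == "birnn":
            self.rnn = nn.GRU(emb_dim, hidden, batch_first=True,
                              bidirectional=True)

    def forward(self, embs):              # embs: (batch, n_tokens, emb_dim)
        if self.mode == "mean":
            return embs.mean(dim=1)
        if self.mode == "max":
            return embs.max(dim=1).values
        _, h_n = self.rnn(embs)           # h_n: (2, batch, hidden)
        return torch.cat([h_n[0], h_n[1]], dim=-1)   # [forward; backward]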
First, we experiment with n-gram re-ranking, considering the |V | most frequent {1, 2, 3}grams in our annotations. Each decoder projects the event phrase embedding hE into a |V |dimensional vector, which is then passed through a softmax function. For instance, the distribution over descriptions of PersonX’s intent is given by: vi = softmax(WihE + bi) Second, we experiment with sequence generation, using RNN decoders to generate the textual description. The event phrase embedding hE is set as the initial state hdec of three decoder RNNs (using GRU cells), which then output the intent/reactions one word at a time (using beam-search at test time). For example, an event’s intent sequence (vi = v(0) i v(1) i . . .) is computed as follows: v(t+1) i = softmax(Wi RNN(v(t) i , h(t) i,dec) + bi) Training objective We minimize the crossentropy between the predicted distribution over words and phrases, against the one actually observed in our dataset. Further, we employ multitask learning, simultaneously minimizing the loss for all three decoders at each iteration. Training details We fix our input embeddings, using 300-dimensional skip-gram word embeddings trained on Google News (Mikolov et al., 2013). For decoding, we consider a vocabulary of size |V | = 14,034 in the n-gram re-ranking setup. For the sequence decoding setup, we only consider the unigrams in V , yielding an output space of 7,110 at each time step. We randomly divided our set of 24,716 unique events (57,094 annotations) into a training/dev./test set using an 80/10/10% split. Some annotations have multiple responses (i.e., a crowdworker gave multiple possible intents and reactions), in which case we take each of the combinations of their responses as a separate training example. 4 Empirical Results Table 3 summarizes the performance of different encoding models on the dev and test set in terms of cross-entropy and recall at 10 predicted intents and reactions. As expected, we see a moderate improvement in recall and cross-entropy when using the more compositional encoder models (ConvNet and BiRNN; both n-gram and sequence de468 Development Test Encoding Function Decoder Average Cross-Ent Recall @10 (%) Average Cross-Ent Recall @10 (%) Intent XReact OReact Intent XReact OReact max-pool n-gram 5.75 31 35 68 5.14 31 37 67 mean-pool n-gram 4.82 35 39 69 4.94 34 40 68 ConvNet n-gram 4.85 36 42 69 4.81 37 44 69 BiRNN 300d n-gram 4.78 36 42 68 4.74 36 43 69 BiRNN 100d n-gram 4.76 36 41 68 4.73 37 43 68 mean-pool sequence 4.59 39 36 67 4.54 40 38 66 ConvNet sequence 4.44 42 39 68 4.40 43 40 67 BiRNN 100d sequence 4.25 39 38 67 4.22 40 40 67 Table 3: Average cross-entropy (lower is better) and recall @10 (percentage of times the gold falls within the top 10 decoded; higher is better) on development and test sets for different modeling variations. We show recall values for PersonX’s intent, PersonX’s reaction and others’ reaction (denoted as “Intent”, “XReact”, and “OReact”). Note that because of two different decoding setups, cross-entropy between n-gram and sequence decoding are not directly comparable. coding setups). Additionally, BiRNN models outperform ConvNets on cross-entropy in both decoding setups. Looking at the recall split across intent vs. reaction labels (“Intent”, “XReact” and “OReact” columns), we see that much of the improvement in using these two models is within the prediction of PersonX’s intents. Note that recall for “OReact” is much higher, since a majority of events do not involve other people. 
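As a rough illustration of the n-gram re-ranking decoders and the multitask objective described above (again a simplified sketch rather than the authors' code; the 200-dimensional encoding assumes the bidirectional 100d GRU above, and the 14,034-entry vocabulary follows the stated training details):

import torch.nn as nn

class NgramRerankingDecoders(nn.Module):
    """Three linear heads over a fixed {1,2,3}-gram vocabulary: one each
    for PersonX's intent, PersonX's reaction, and others' reactions."""
    def __init__(self, enc_dim=200, vocab_size=14034):
        super().__init__()
        self.heads = nn.ModuleDict({
            name: nn.Linear(enc_dim, vocab_size)
            for name in ("intent", "xreact", "oreact")})

    def forward(self, h_event):                    # h_event: (batch, enc_dim)
        return {name: head(h_event) for name, head in self.heads.items()}

def multitask_loss(logits, gold):
    """Sum the three cross-entropy losses (the softmax is applied inside
    CrossEntropyLoss); gold maps each head name to target indices."""
    criterion = nn.CrossEntropyLoss()
    return sum(criterion(logits[name], gold[name]) for name in logits)

The sequence-generation variant would replace each linear head with a GRU decoder whose initial hidden state is h_event, decoding the description one word at a time.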
Human evaluation To further assess the quality of our models, we randomly select 100 events from our test set and ask crowd-workers to rate generated intents and reactions. We present 5 workers with an event’s top 10 most likely intents and reactions according to our model and ask them to select all those that make sense to them. We evaluate each model’s precision @10 by computing the average number of generated responses that make sense to annotators. Figure 4 summarizes the results of this evaluation. In most cases, the performance is higher for the sequential decoder than the corresponding n-gram decoder. The biggest gain from using sequence decoders is in intent prediction, possibly because intent explanations are more likely to be longer. The BiRNN and ConvNet encoders consistently have higher precision than the mean-pooling with the BiRNN-seq setup slightly outperforming other models. Unless otherwise specified, this is the model we employ in further sections. 0% 10% 20% 30% 40% 50% 60% Intent XReact OReact mean-pool ngram mean-pool seq ConvNet ngram ConvNet seq BiRNN ngram BiRNN seq Figure 4: Average precision @10 of each model’s top ten responses in the human evaluation. We show results for various encoder functions (meanpool, ConvNet, BiRNN-100d) combined with two decoding setups (n-gram re-ranking, sequence generation). Error analyses We test whether certain types of events are easier for predicting commonsense inference. In Figure 6, we show the difference in cross-entropy of the BiRNN 100d model on predicting different portions of the development set including: Blank events (events containing non-instantiated arguments), 2+ People events (events containing multiple different Person variables), and Idiom events (events coming from the Wiktionary idiom list). Our results show that, while intent prediction performance remains sim469 learn, get a job,
[Figure 5 contents: decoded intents and reactions sampled along interpolations between event pairs drawn from PersonX takes PersonY to the emergency room, PersonX punches PersonY's face, PersonX goes to school, PersonX comes home after school, PersonX washes PersonX's legs, and PersonX cuts PersonX's legs; see the caption that follows.]
 after school Figure 5: Sample predictions from homotopic embeddings (gradual interpolation between Event1 and Event2), selected from the top 10 beam elements decoded in the sequence generation setup. Examples highlight differences captured when ideas are similar (going to and coming from school), when only a single word differs (washes versus cuts), and when two events are unrelated. Recall @10 (%) Blanks 2+ People Idioms Full Dev 67 67 43 68 38 20 38 37 39 31 41 30 Intent XReact OReact Figure 6: Recall @ 10 (%) on different subsets of the development set for intents, PersonX’s reactions, and other people’s reactions, using the BiRNN 100d model. “Full dev” represents the recall on the entire development dataset. ilar for all three sets of events, it is 10% behind intent prediction on the full development set. Additionally, predicting other people’s reactions is more difficult for the model when other people are explicitly mentioned. Unsurprisingly, idioms are particularly difficult for commonsense inference, perhaps due to the difficulty in composing meaning over nonliteral or noncompositional event descriptions. To further evaluate the geometry of the embedding space, we analyze interpolations between pairs of event phrases (from outside the train set), similar to the homotopic analysis of Bowman et al. (2016). For a handful of event pairs, we decode intents, reactions for PersonX, and reactions for other people from points sampled at equal intervals on the interpolated line between two event phrases. We show examples in Figure 5. The embedding space distinguishes changes from generally positive to generally negative words and is also able to capture small differences between event phrases (such as “washes” versus “cuts”). 5 Analyzing Bias via Event2Mind Inference Through Event2Mind inference, we can attempt to bring to the surface what is implied about people’s behavior and mental states. We employ this inference to analyze implicit bias in modern films. As shown in Figure 7, our model is able to analyze character portrayal beyond what is explicit in text, by performing pragmatic inference on character actions to explain aspects of a character’s mental state. In this section, we use our model’s inference to shed light on gender differences in intents behind and reactions to characters’ actions. 5.1 Processing of Movie Scripts For our portrayal analyses, we use scene descriptions from 772 movie scripts released by Gorinski and Lapata (2015), assigned to over 21,000 characters as done by Sap et al. (2017). We extract events from the scene descriptions, and generate their 10 most probable intent and reaction sequences using our BiRNN sequence model (as in Figure 7). We then categorize generated intents and reactions into groups based on LIWC category scores of the generated output (Tausczik and Pennebaker, 2016).3 The intent and reaction categories are then 3We only consider content word categories: ‘Core Drives 470 Vivian sits on her bed, lost in thought. Her bags are packed, ... PersonX sits on PersonX's bed , lost in thought Reaction Juno laughs and hugs her father, planting a smooch on his cheek. PersonX hugs ___ , planting a smooch on PersonY's cheek Intent show affection show love loving none funny friendly nice express love worried sad upset embarrassed sick scared lonely bad Figure 7: Two scene description snippets from Juno (2007, top) and Pretty Woman (1990, bottom), augmented with Event2mind inferences on the characters’ intents and reactions. 
E.g., our model infers that the event PersonX sits on PersonX’s bed, lost in thought implies that the agent, Vivian, is sad or worried. aggregated for each character, and standardized (zero-mean and unit variance). We compute correlations with gender for each category of intent or reaction using a logistic regression model, testing significance while using Holm’s correction for multiple comparisons (Holm, 1979).4 To account for the gender skew in scene presence (29.4% of scenes have women), we statistically control for the total number of words in a character’s scene descriptions. Note that the original event phrases are all gender agnostic, as their participants have been replaced by variables (e.g., PersonX). We also find that the types of gender biases uncovered remain similar when we run these analyses on the human annotations or the generated words and phrases from the BiRNN with n-gram re-ranking decoding setup. and Needs’, ‘Personal Concerns’, ‘Biological Processes’, ‘Cognitive Processes’, ‘Social Words’, ‘Affect Words’, ‘Perceptual Processes’. We refer the reader to Tausczik and Pennebaker (2016) or http://liwc.wpengine.com/ compare-dictionaries/ for a complete list of category descriptions. 4Given the data limitation, we represent gender as a binary, but acknowledge that gender is a more complex social construct. 5.2 Revealing Implicit Bias via Explicit Intents and Reactions Female: intents AFFILIATION, FRIEND, FAMILY BODY, SEXUAL, INGEST SEE, INSIGHT, DISCREP Male: intents DEATH, HEALTH, ANGER, NEGEMO RISK, POWER, ACHIEVE, REWARD, WORK CAUSE, TENTATIVE‡ Female: reactions POSEMO, AFFILIATION, FRIEND, REWARD INGEST, SEXUAL‡, BODY‡ Male: reactions WORK, ACHIEVE, POWER, HEALTH† Female: others’ reactions POSEMO, AFFILIATION, FRIEND INGEST, SEE, INSIGHT Male: others’ reactions ACHIEVE, RISK† SAD, NEGEMO‡, ANGER† Table 4: Select LIWC categories correlated with gender. All results are significant when corrected for multiple comparisons at p < 0.001, except †p < 0.05 and ‡p < 0.01. Our Event2Mind inferences automate portrayal analyses that previously required manual annotations (Behm-Morawitz and Mastro, 2008; Prentice and Carranza, 2002; England et al., 2011). Shown in Table 4, our results indicate a gender bias in the behavior ascribed to characters, consistent with psychology and gender studies literature (Collins, 2011). Specifically, events with female semantic agents are intended to be helpful to other people (intents involving FRIEND, FAMILY, and AFFILIATION), particularly relating to eating and making food for themselves and others (INGEST, BODY). Events with male agents on the other hand are motivated by and resulting in achievements (ACHIEVE, MONEY, REWARDS, POWER). Women’s looks and sexuality are also emphasized, as their actions’ intents and reactions are sexual, seen, or felt (SEXUAL, SEE, PERCEPT). Men’s actions, on the other hand, are motivated by violence or fighting (DEATH, ANGER, RISK), with strong negative reactions (SAD, ANGER, NEGATIVE EMOTION). Our approach decodes nuanced implications 471 into more explicit statements, helping to identify and explain gender bias that is prevalent in modern literature and media. Specifically, our results indicate that modern movies have the bias to portray female characters as having pro-social attitudes, whereas male characters are portrayed as being competitive or pro-achievement. 
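The per-category correlation analysis of Section 5.1 could be approximated along the following lines; this is a hypothetical sketch using statsmodels, with an assumed data frame layout (one row per character, a binary female column, a word-count column, and one aggregated LIWC score column per category), not the authors' analysis code.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

def gender_correlations(df, categories):
    """Fit one logistic regression per LIWC category, predicting the
    binary gender label from the standardized category score while
    controlling for the amount of scene-description text."""
    coefs, pvals = [], []
    for cat in categories:
        score = (df[cat] - df[cat].mean()) / df[cat].std()   # standardize
        X = sm.add_constant(pd.DataFrame({
            "score": score,
            "n_words": df["n_words"],     # control for text amount
        }))
        fit = sm.Logit(df["female"], X).fit(disp=0)
        coefs.append(fit.params["score"])
        pvals.append(fit.pvalues["score"])
    reject, p_holm, _, _ = multipletests(pvals, method="holm")
    return pd.DataFrame({"category": categories, "coef": coefs,
                         "p_holm": p_holm, "significant": reject})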
This is consistent with gender stereotypes that have been studied in movies in both NLP and psychology literature (Agarwal et al., 2015; Madaan et al., 2017; Prentice and Carranza, 2002; England et al., 2011). 6 Related Work Prior work has sought formal frameworks for inferring roles and other attributes in relation to events (Baker et al., 1998; Das et al., 2014; Schuler et al., 2009; Hartshorne et al., 2013, inter alia), implicitly connoted by events (Reisinger et al., 2015; White et al., 2016; Greene, 2007; Rashkin et al., 2016), or sentiment polarities of events (Ding and Riloff, 2016; Choi and Wiebe, 2014; Russo et al., 2015; Ding and Riloff, 2018). In addition, recent work has studied the patterns which evoke certain polarities (Reed et al., 2017), the desires which make events affective (Ding et al., 2017), the emotions caused by events (Vu et al., 2014), or, conversely, identifying events or reasoning behind particular emotions (Gui et al., 2017). Compared to this prior literature, our work uniquely learns to model intents and reactions over a diverse set of events, includes inference over event participants not explicitly mentioned in text, and formulates the task as predicting the textual descriptions of the implied commonsense instead of classifying various event attributes. Previous work in natural language inference has focused on linguistic entailment (Bowman et al., 2015; Bos and Markert, 2005) while ours focuses on commonsense-based inference. There also has been inference or entailment work that is more generation focused: generating, e.g., entailed statements (Zhang et al., 2017; Blouw and Eliasmith, 2018), explanations of causality (Kang et al., 2017), or paraphrases (Dong et al., 2017). Our work also aims at generating inferences from sentences; however, our models infer implicit information about mental states and causality, which has not been studied by most previous systems. Also related are commonsense knowledge bases (Espinosa and Lieberman, 2005; Speer and Havasi, 2012). Our work complements these existing resources by providing commonsense relations that are relatively less populated in previous work. For instance, ConceptNet contains only 25% of our events, and only 12% have relations that resemble intent and reaction. We present a more detailed comparison with ConceptNet in Appendix C. 7 Conclusion We introduced a new corpus, task, and model for performing commonsense inference on textuallydescribed everyday events, focusing on stereotypical intents and reactions of people involved in the events. Our corpus supports learning representations over a diverse range of events and reasoning about the likely intents and reactions of previously unseen events. We also demonstrate that such inference can help reveal implicit gender bias in movie scripts. Acknowledgments We thank the anonymous reviewers for their insightful comments. We also thank xlab members at the University of Washington, Martha Palmer, Tim O’Gorman, Susan Windisch Brown, Ghazaleh Kazeminejad as well as other members at the University of Colorado at Boulder for many helpful comments for our development of the annotation pipeline. This work was supported in part by National Science Foundation Graduate Research Fellowship Program under grant DGE-1256082, NSF grant IIS-1714566, and the DARPA CwC program through ARO (W911NF-15-1-0543). References Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. 
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org. Apoorv Agarwal, Jiehan Zheng, Shruti Kamath, Sriramkumar Balasubramanian, and Shirin Ann Dey. 2015. Key female characters in film have more to 472 talk about besides men: Automating the bechdel test. In NAACL, pages 830–840, Denver, Colorado. Association for Computational Linguistics. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The berkeley framenet project. In COLINGACL. Elizabeth Behm-Morawitz and Dana E Mastro. 2008. Mean girls? The influence of gender portrayals in teen movies on emerging adults’ gender-based attitudes and beliefs. Journalism & Mass Communication Quarterly, 85(1):131–146. Peter Blouw and Chris Eliasmith. 2018. Using neural networks to generate inferential roles for natural language. Frontiers in Psychology, 8:2335. Johan Bos and Katja Markert. 2005. Recognising textual entailment with robust logical inference. In MLCW. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal J´ozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In CoNLL. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In SSST@EMNLP. Yoonjung Choi and Janyce Wiebe. 2014. +/effectwordnet: Sense-level lexicon acquisition for opinion inference. In EMNLP. Rebecca L Collins. 2011. Content analysis of gender roles in media: Where are we now and where should we go? Sex Roles, 64(3-4):290–298. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment, pages 177–190. Springer. Dipanjan Das, Desai Chen, Andr´e F. T. Martins, Nathan Schneider, and Noah A. Smith. 2014. Frame-semantic parsing. Computational Linguistics, 40(1):9–56. Haibo Ding, Tianyu Jiang, and Ellen Riloff. 2017. Why is an event affective? Classifying affective events based on human needs. In AAAI Workshop on Affective Content Analysis. Haibo Ding and Ellen Riloff. 2016. Acquiring knowledge of affective events from blogs using label propagation. In AAAI. Haibo Ding and Ellen Riloff. 2018. Weakly supervised induction of affective events by optimizing semantic consistency. In AAAI. Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question answering. In EMNLP. Dawn Elizabeth England, Lara Descartes, and Melissa A Collier-Meek. 2011. Gender role portrayal and the Disney princesses. Sex roles, 64(78):555–567. Jos´e H. Espinosa and Henry Lieberman. 2005. Eventnet: Inferring temporal relations between commonsense events. In MICAI. Vindu Goel and Mike Isaac. 2016. 
Facebook Moves to Ban Private Gun Sales on its Site and Instagram. https://www. nytimes.com/2016/01/30/technology/ facebook-gun-sales-ban.html. Accessed: 2018-02-19. Yoav Goldberg and Jon Orwant. 2013. A dataset of syntactic-ngrams over time from a very large corpus of english books. In SEM2013. Andrew S Gordon and Reid Swanson. 2008. StoryUpgrade: finding stories in internet weblogs. In ICWSM. Philip John Gorinski and Mirella Lapata. 2015. Movie script summarization as graph-based scene extraction. In NAACL, pages 1066–1076. Stephan Charles Greene. 2007. Spin: Lexical semantics, transitivity, and the identification of implicit sentiment. Lin Gui, Jiannan Hu, Yulan He, Ruifeng Xu, Qin Lu, and Jiachen Du. 2017. A question answering approach for emotion cause extraction. In EMNLP. Joshua K. Hartshorne, Claire Bonial, and Martha Palmer. 2013. The verbcorner project: Toward an empirically-based semantic decomposition of verbs. In EMNLP. Sture Holm. 1979. A simple sequentially rejective multiple test procedure. Scandinavian journal of statistics, pages 65–70. Dongyeop Kang, Varun Gangal, Ang Lu, Zheng Chen, and Eduard H. Hovy. 2017. Detecting and explaining causes from text for a time series event. In EMNLP. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP. Dan Klein and Christopher D Manning. 2003. Accurate unlexicalized parsing. In ACL. Yann LeCun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324. 473 Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel. 2016. Commonsense knowledge base completion. In ACL. Nishtha Madaan, Sameep Mehta, Taneea S. Agrawaal, Vrinda Malhotra, Aditi Aggarwal, and Mayank Saxena. 2017. Analyzing gender stereotyping in bollywood movies. CoRR, abs/1710.04117. Marie-Catherine de Marneffe, Christopher D. Manning, and Christopher Potts. 2012. Did it happen? the pragmatic complexity of veridicality assessment. Computational Linguistics, 38:301–333. Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In NAACL. Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification. In ACL. Deborah A Prentice and Erica Carranza. 2002. What women and men should be, shouldn’t be, are allowed to be, and don’t have to be: The contents of prescriptive gender stereotypes. Psychology of women quarterly, 26(4):269–281. Anil Ramakrishna, Victor R Mart´ınez, Nikolaos Malandrakis, Karan Singla, and Shrikanth Narayanan. 2017. Linguistic analysis of differences in portrayal of movie characters. In ACL, pages 1669–1678, Stroudsburg, PA, USA. Association for Computational Linguistics. Hannah Rashkin, Sameer Singh, and Yejin Choi. 2016. Connotation frames: A data-driven investigation. In ACL. Lena Reed, JiaQi Wu, Shereen Oraby, Pranav Anand, and Marilyn A. Walker. 2017. Learning lexicofunctional patterns for first-person affect. In ACL. Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015. Semantic proto-roles. TACL, 3:475– 488. 
Irene Russo, Tommaso Caselli, and Carlo Strapparava. 2015. Semeval-2015 task 9: Clipeval implicit polarity of events. In SemEval@NAACL-HLT. Maarten Sap, Marcella Cindy Prasetio, Ari Holtzman, Hannah Rashkin, and Yejin Choi. 2017. Connotation frames of power and agency in modern films. In EMNLP, pages 2329–2334. Karin Kipper Schuler, Anna Korhonen, and Susan Windisch Brown. 2009. Verbnet overview, extensions, mappings and applications. In NAACL. Robert Speer and Catherine Havasi. 2012. Representing general relational knowledge in conceptnet 5. In LREC. Yla R Tausczik and James W Pennebaker. 2016. The psychological meaning of words: LIWC and computerized text analysis methods. J. Lang. Soc. Psychol. Emmett Tomai and Ken Forbus. 2010. Using narrative functions as a heuristic for relevance in story understanding. In Proceedings of the Intelligent Narrative Technologies III Workshop, page 9. ACM. Hoa Trong Vu, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2014. Acquiring a dictionary of emotion-provoking events. In EACL. Aaron Steven White, Drew Reisinger, Keisuke Sakaguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Universal decompositional semantics on universal dependencies. In EMNLP. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. TACL, 3:345– 358. Sheng Zhang, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2017. Ordinal common-sense inference. TACL, 5:379–395.
2018
43
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 474–484 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 474 Neural Adversarial Training for Semi-supervised Japanese Predicate-argument Structure Analysis Shuhei Kurita†‡ Daisuke Kawahara†‡ †Graduate School of Informatics, Kyoto University ‡CREST, JST {kurita, dk, kuro}@nlp.ist.i.kyoto-u.ac.jp Sadao Kurohashi†‡ Abstract Japanese predicate-argument structure (PAS) analysis involves zero anaphora resolution, which is notoriously difficult. To improve the performance of Japanese PAS analysis, it is straightforward to increase the size of corpora annotated with PAS. However, since it is prohibitively expensive, it is promising to take advantage of a large amount of raw corpora. In this paper, we propose a novel Japanese PAS analysis model based on semi-supervised adversarial training with a raw corpus. In our experiments, our model outperforms existing state-of-the-art models for Japanese PAS analysis. 1 Introduction In pro-drop languages, such as Japanese and Chinese, pronouns are frequently omitted when they are inferable from their contexts and background knowledge. The natural language processing (NLP) task for detecting such omitted pronouns and searching for their antecedents is called zero anaphora resolution. This task is essential for downstream NLP tasks, such as information extraction and summarization. For Japanese, zero anaphora resolution is usually conducted within predicate-argument structure (PAS) analysis as a task of finding an omitted argument for a predicate. PAS analysis is a task to find an argument for each case of a predicate. For Japanese PAS analysis, the ga (nominative, NOM), wo (accusative, ACC) and ni (dative, DAT) cases are generally handled. To develop models for Japanese PAS analysis, supervised learning methods using annotated corpora have been applied on the basis of morpho-syntactic clues. However, omitted pronouns have few clues and thus these models try to learn relations between a predicate and its (omitted) argument from the annotated corpora. The annotated corpora consist of several tens of thousands sentences, and it is difficult to learn predicate-argument relations or selectional preferences from such small-scale corpora. The state-of-the-art models for Japanese PAS analysis achieve an accuracy of around 50% for zero pronouns (Ouchi et al., 2015; Shibata et al., 2016; Iida et al., 2016; Ouchi et al., 2017; Matsubayashi and Inui, 2017). A promising way to solve this data scarcity problem is enhancing models with a large amount of raw corpora. There are two major approaches to using raw corpora: extracting knowledge from raw corpora beforehand (Sasano and Kurohashi, 2011; Shibata et al., 2016) and using raw corpora for data augmentation (Liu et al., 2017b). In traditional studies on Japanese PAS analysis, selectional preferences are extracted from raw corpora beforehand and are used in PAS analysis models. For example, Sasano and Kurohashi (2011) propose a supervised model for Japanese PAS analysis based on case frames, which are automatically acquired from a raw corpus by clustering predicate-argument structures. However, case frames are not based on distributed representations of words and have a data sparseness problem even if a large raw corpus is employed. Some recent approaches to Japanese PAS analysis combines neural network models with knowledge extraction from raw corpora. Shibata et al. 
(2016) extract selectional preferences by an unsupervised method that is similar to negative sampling (Mikolov et al., 2013). They then use the pre-extracted selectional preferences as one of the features to their PAS analysis model. The PAS analysis model is trained by a supervised method and the selectional preference representations are fixed during training. Us475 Predicate NOM ACC DAT (1) タクシーがNOM 客をACC 駅にDAT 送った。 送った タクシー 客 駅 i k e u k a y k i h s u k a t a tt u k o . a tt u k o i n -i k e o w u k a y k a g -i h s u k a t n oit a t s r e g n e s s a p i x a t d eir r a c / t n e s . n oit a t s e h t o t s r e g n e s s a p d eir r a c i x a t A (2) その列車は荷物をACC 運んだ。 運んだ 列車 荷物 u s t o m i n a h s s e r a d n o k a h a. d n o k a h o w u s t o m i n a w a h s s e r o n o s NULL e g a g g a b n ia r t d eir r a c .s e g a g g a b d eir r a c o sla n ia r t e h T (3) タクシーがNOM 客をACC 乗せたとき事故にDAT 巻き込まれた。 乗せた タクシー 客 takushi-ga kyaku-wo noseta toki jiko-ni makikomareta. noseta takushi kyaku NULL When the taxi picked up passengers, it was involved in the accident. picked up taxi passenger 巻き込まれた タクシー 事故 makikomareta takushi NULL jiko was involved taxi accident (4) この列車には乗れません。 乗れません あなた 列車 a t a n a n e s a m e r o n n. e s a m e r o n a w -i n a h s s e r o n o k NULL ressha n ia r t u o y e k a t t o n n a c . n ia r t si h t e k a t t o n n a c u o Y Table 1: Examples of Japanese sentences and their PAS analysis. In sentence (1), case markers ( が(ga), を(wo), and に(ni) ) correspond to NOM, ACC, and DAT. In example (2), the correct case marker is hidden by the topic marker は(wa). In sentence (3), the NOM argument of the second predicate 巻き込まれた(was involved), is dropped. NULL indicates that the predicate does not have the corresponding case argument or that the case argument is not written in the sentence. ing pre-trained external knowledge in the form of word embeddings has also been ubiquitous. However, such external knowledge is overwritten in the task-specific training. The other approach to using raw corpora for PAS analysis is data augmentation. Liu et al. (2017b) generate pseudo training data from a raw corpus and use them for their zero pronoun resolution model. They generate the pseudo training data by dropping certain words or pronouns in a raw corpus and assuming them as correct antecedents. After generating the pseudo training data, they rely on ordinary supervised training based on neural networks. In this paper, we propose a neural semisupervised model for Japanese PAS analysis. We adopt neural adversarial training to directly exploit the advantage of using a raw corpus. Our model consists of two neural network models: a generator model of Japanese PAS analysis and a so-called “validator” model of the generator prediction. The generator neural network is a model that predicts probabilities of candidate arguments of each predicate using RNN-based features and a head-selection model (Zhang et al., 2017). The validator neural network gets inputs from the generator and scores them. This validator can score the generator prediction even when PAS gold labels are not available. We apply supervised learning to the generator and unsupervised learning to the entire network using a raw corpus. Our contributions are summarized as follows: (1) a novel adversarial training model for PAS analysis; (2) learning from a raw corpus as a source of external knowledge; and (3) as a result, we achieve state-of-the-art performance on Japanese PAS analysis. 
2 Task Description Japanese PAS analysis determines essential case roles of words for each predicate: who did what to whom. In many languages, such as English, case roles are mainly determined by word order. However, in Japanese, word order is highly flexible. In Japanese, major case roles are the nominative case (NOM), the accusative case (ACC) and the dative case (DAT), which roughly correspond to Japanese surface case markers: が(ga), を(wo), and に(ni). These case markers are often hidden by topic markers, and case arguments are also often omitted. We explain two detailed tasks of PAS analysis: case analysis and zero anaphora resolution. In Table 1, we show four example Japanese sentences and their PAS labels. PAS labels are attached to nominative, accusative and dative cases of each predicate. Sentence (1) has surface case markers that correspond to argument cases. Sentence (2) is an example sentence for case analysis. Case analysis is a task to find hidden case markers of arguments that have direct depen476 ( j-th predicate ) NOM: Raw Labeled ACC: DAT: v′(arg1) v′(arg2) v′(arg3) . . . v′(arg1) v′(arg2) v′(arg3) . . . v′(arg1) v′(arg2) v′(arg3) . . . Attention mechanism to h′DAT predj h′ACC predj h′NOM predj h′ predj FNN of 1 (xl, yl) xul Generator PAS Generator Training using xl xul validator embeddings v′( ) * s ′DAT predj s ′ACC predj s ′NOM predj Error Raw Corpus q(G(xl), yl) G(x) V (x) Corpus Corpus Validator Figure 1: The overall model of adversarial training with a raw corpus. The PAS generator G(x) and validator V (x). The validator takes inputs from the generator as a form of the attention mechanism. The validator itself is a simple feed-forward network with inputs of j-th predicate and its argument representations: {h′ predj, h′casek predj }. The validator returns scores for three cases and they are used for both the supervised training of the validator and the unsupervised training of the generator. The supervised training of the generator is not included in this figure. dencies to their predicates. Sentence (2) does not have the nominative case marker が(ga). It is hidden by the topic case marker は(wa). Therefore, a case analysis model has to find the correct NOM case argument 列車(train). Sentence (3) is an example sentence for zero anaphora resolution. Zero anaphora resolution is a task to find arguments that do not have direct dependencies to their predicates. At the second predicate “巻き込まれた”(was involved), the correct nominative argument is “タクシー”(taxi), while this does not have direct dependencies to the second predicate. A zero anaphora resolution model has to find “タクシー”(taxi) from the sentence, and assign it to the NOM case of the second predicate. In the zero anaphora resolution task, some correct arguments are not specified in the article. This is called as exophora. We consider “author” and “reader” arguments as exophora (Hangyo et al., 2013). They are frequently dropped from Japanese natural sentences. Sentence (4) is an example of dropped nominative arguments. In this sentence, the nominative argument is “あなた” (you), but “あ なた” (you) does not appear in the sentence. This is also included in zero anaphora resolution. Except these special arguments of exophora, we focus on intra-sentential anaphora resolution in the same way as (Shibata et al., 2016; Iida et al., 2016; Ouchi et al., 2017; Matsubayashi and Inui, 2017). We also attach NULL labels to cases that predicates do not have. 
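To make the label space concrete, the sketch below shows one possible way (ours, not the authors') to represent the gold PAS labels of sentences (3) and (4) from Table 1, with NULL cases and the exophora entities treated as ordinary candidate values.

```python
# Illustrative only: a minimal representation of gold PAS labels for the
# examples in Table 1. NULL marks a case the predicate does not have or whose
# argument is not written in the sentence; "author"/"reader" are the exophora
# entities that can fill dropped arguments.
gold_pas = {
    # sentence (3): two predicates; the NOM of the second one is a zero anaphor
    "乗せた (noseta, picked up)": {"NOM": "タクシー (taxi)",
                                   "ACC": "客 (passenger)",
                                   "DAT": "NULL"},
    "巻き込まれた (makikomareta, was involved)": {"NOM": "タクシー (taxi)",  # zero anaphora
                                                  "ACC": "NULL",
                                                  "DAT": "事故 (accident)"},
    # sentence (4): the dropped NOM resolves to the exophora entity "reader"
    "乗れません (noremasen, cannot take)": {"NOM": "あなた (you) [reader]",
                                            "ACC": "NULL",
                                            "DAT": "列車 (train)"},
}
```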
3 Model
3.1 Generative Adversarial Networks
Generative adversarial networks were originally proposed for image generation tasks (Goodfellow et al., 2014; Salimans et al., 2016; Springenberg, 2015). The original model of Goodfellow et al. (2014) consists of a generator G and a discriminator D. The discriminator D is trained to separate the real data distribution p_{data}(x) from images generated from noise samples z^{(i)} \in D_z drawn from the noise prior p(z). The discriminator loss is

L_D = -\left( \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] \right),  (1)

and they train the discriminator by minimizing this loss while fixing the generator G. Similarly, the generator G is trained by minimizing

L_G = \frac{1}{|D_z|} \sum_i \log\left(1 - D(G(z^{(i)}))\right),  (2)

while fixing the discriminator D. In this way, the discriminator tries to discriminate the generated images from real images, while the generator tries to generate images that can deceive the adversarial discriminator. This training scheme has been applied to many generation tasks, including sentence generation (Subramanian et al., 2017), machine translation (Britz et al., 2017), dialog generation (Li et al., 2017) and text classification (Liu et al., 2017a).
3.2 Proposed Adversarial Training Using Raw Corpus
Japanese PAS analysis and many other syntactic analyses in NLP are not purely generative, so we can make use of a raw corpus instead of the numerical noise distribution p(z). In this work, we use an adversarial training method with a raw corpus, combined with ordinary supervised learning on an annotated corpus. Let x_l \in D_l denote labeled data and p(x_l) denote their label distribution. We also use unlabeled data x_{ul} \in D_{ul} later. Our generator G can be trained by the cross-entropy loss with labeled data:

L_{G/SL} = -\mathbb{E}_{x_l, y \sim p(x_l)}\left[ \log G(x_l) \right].  (3)

Supervised training of the generator works by minimizing this loss. Note that we follow the notation of Subramanian et al. (2017) in this subsection. In addition, we train a so-called validator against the generator errors. We use the term "validator" instead of "discriminator" for our adversarial training. Unlike the discriminator, which is used to divide generated images from real images, our validator is used to score the generator results. Assume that y_l are the true labels and G(x_l) is the predicted label distribution of data x_l from the generator. We define the labels of the generator errors as

q(G(x_l), y_l) = \delta_{\arg\max[G(x_l)],\, y_l},  (4)

where \delta_{i,j} = 1 only if i = j, and otherwise \delta_{i,j} = 0. This means that q is equal to 1 if the argument that the generator predicts is correct, and 0 otherwise. We use this generator error as the training labels of the following validator. The inputs of the validator are both the generator outputs G(x) and the data x \in D. The validator can be written as V(G(x)). The validator V is trained with labeled data x_l by

L_{V/SL} = -\mathbb{E}_{x_l, y \sim q(G(x_l), y_l)}\left[ \log V(G(x_l)) \right],  (5)

while fixing the generator G. This equation means that the validator is trained with labels of the generator error q(G(x_l), y_l). Once the validator is trained, we train the generator with an unsupervised method. The generator G is trained with unlabeled data x_{ul} \in D_{ul} by minimizing the loss

L_{G/UL} = -\frac{1}{|D_{ul}|} \sum_i \log V\left(G(x_{ul}^{(i)})\right),  (6)

while fixing the validator V. This generator training loss using the validator can be explained as follows. The generator tries to increase the validator scores towards 1, while the validator is fixed.
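As a concrete reading of Equations (3) to (6), the following PyTorch-style sketch shows how the three losses could be computed, assuming the generator returns a softmax distribution over candidate arguments for one case and the validator returns a sigmoid score in [0, 1]. The function names and the binary cross-entropy reading of Equation (5) are our interpretation, not the authors' code.

```python
import torch
import torch.nn.functional as F

def generator_supervised_loss(probs, gold):                      # Eq. (3)
    # probs: (batch, n_candidates) softmax output of the generator for one case
    # gold:  (batch,) indices of the gold arguments
    return -torch.log(probs[torch.arange(len(gold)), gold] + 1e-8).mean()

def validator_labels(probs, gold):                               # Eq. (4)
    # q = 1 if the generator's argmax prediction is correct, 0 otherwise
    return (probs.argmax(dim=-1) == gold).float()

def validator_supervised_loss(scores, q):                        # Eq. (5)
    # scores: (batch,) validator outputs in [0, 1]; trained towards q while the
    # generator is fixed (read here as binary cross-entropy against q)
    return F.binary_cross_entropy(scores, q)

def generator_unsupervised_loss(scores):                         # Eq. (6)
    # raw-corpus batch: push the (fixed) validator's scores towards 1
    return -torch.log(scores + 1e-8).mean()
```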
If the validator is well-trained, it returns scores close to 1 for correct PAS labels that the generator outputs, and 0 for wrong labels. Therefore, in Equation (6), the generator tries to predict correct labels in order to increase the scores of fixed validator. Note that the validator has a sigmoid function for the output of scores. Therefore output scores of the validator are in [0, 1]. We first conduct the supervised training of generator network with Equation (3). After this, following Goodfellow et al. (2014), we use k-steps of the validator training and one-step of the generator training. We also alternately conduct l-steps of supervised training of the generator. The entire loss function of this adversarial training is L = LG/SL + LV/SL + LG/UL . (7) Our contribution is that we propose the validator and train it against the generator errors, instead of discriminating generated data from real data. Salimans et al. (2016) explore the semi-supervised learning using adversarial training for K-classes image classification tasks. They add a new class of images that are generated by the generator and classify them. Miyato et al. (2016) propose virtual adversarial training for semi-supervised learning. They exploit unlabeled data for continuous smoothing of data distributions based on the adversarial perturbation of Goodfellow et al. (2015). These studies, however, do not use the counterpart neural networks for learning structures of unlabeled data. In our Japanese PAS analysis model, the generator corresponds to the head-selection-based neural network for Japanese anaphora resolution. Figure 1 shows the entire model. The labeled data correspond to the annotated corpora and the labels correspond to the PAS argument labels. The unlabeled data correspond to raw corpora. We explain the details of the generator and the validator neural networks in Sec.3.3 and Sec.3.4 in turn. 3.3 Generator of PAS Analysis The generator predicts the probabilities of arguments for each of the NOM, ACC and DAT cases of a predicate. As shown in Figure 2, the generator consists of a sentence encoder and an argument selection model. In the sentence encoder, we 478 Bi-LSTM Bi-LSTM Bi-LSTM Bi-LSTM Bi-LSTM Bi-LSTM Bi-LSTM Bi-LSTM embedding embedding embedding embedding この 列車 に 乗れません harg1 1 harg1 hpred1 W casek 1 W casek 2 scasek arg1,predj scasek arg2,predj hpredj hpath j scasek arg3,predj softmax harg2hpredj harg3hpredj 2 hpath j 3 hpath j Argument Selection Model Sentence Encoder Model pcasek arg1,predj pcasek arg2,predj pcasek arg3,predj ( j-th predicate, k-th case ) kono ressha ni noremasen Bi-LSTM Bi-LSTM embedding は wa this train can not take W casek 1 W casek 2 W casek 1 W casek 2 Figure 2: The generator of PAS. The sentence encoder is a three-layer bi-LSTM to compute the distributed representations of a predicate and its arguments: hpredi and hargi. The argument selection model is two-layer feedforward neural networks to compute the scores, scasek argi,predj, of candidate arguments for each case of a predicate. use a three-layer bidirectional-LSTM (bi-LSTM) to read the whole sentence and extract both global and local features as distributed representations. The argument selection model consists of a twolayer feedforward neural network (FNN) and a softmax function. For the sentence encoder, inputs are given as a sequence of embeddings v(x), each of which consist of word x, its inflection from, POS and detailed POS. They are concatenated and fed into the bi-LSTM layers. 
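A compact PyTorch-style sketch of the generator described in this subsection is given below: concatenated word and tag embeddings are encoded by a bi-LSTM (a single layer here, three layers in the paper), and a separate feed-forward scorer per case is followed by a softmax over candidate arguments, as detailed in the rest of this subsection (Equation (8)). Path and exophora embeddings are omitted for brevity; the class and method names and the simplified shapes are ours, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PASGenerator(nn.Module):
    # Simplified sketch: one bi-LSTM layer (the paper uses three) and no path
    # or exophora embeddings; each case (NOM, ACC, DAT) has its own scorer.
    def __init__(self, n_words, n_tags, w_dim=100, t_dim=10, hidden=256):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, w_dim)
        self.tag_emb = nn.Embedding(n_tags, t_dim)
        self.encoder = nn.LSTM(w_dim + t_dim, hidden,
                               batch_first=True, bidirectional=True)
        self.scorers = nn.ModuleDict({
            case: nn.Sequential(nn.Linear(4 * hidden, 1000), nn.ReLU(),
                                nn.Linear(1000, 1))
            for case in ("NOM", "ACC", "DAT")})

    def forward(self, words, tags, pred_idx):
        # words, tags: (1, seq_len) index tensors for one sentence
        x = torch.cat([self.word_emb(words), self.tag_emb(tags)], dim=-1)
        h, _ = self.encoder(x)                      # (1, seq_len, 2 * hidden)
        h_pred = h[:, pred_idx]                     # predicate representation
        pairs = torch.cat([h_pred.unsqueeze(1).expand_as(h), h], dim=-1)
        return {case: torch.softmax(scorer(pairs).squeeze(-1), dim=-1)
                for case, scorer in self.scorers.items()}  # p(arg_i | pred_j, case_k)
```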
The bi-LSTM layers read these embeddings in forward and backward order and outputs the distributed representations of a predicate and a candidate argument: hpredj and hargi. Note that we also use the exophora entities, i.e., an author and a reader, as argument candidates. Therefore, we use specific embeddings for them. These embeddings are not generated by the biLSTM layers but are directly used in the argument selection model. We also use path embeddings to capture a dependency relation between a predicate and its candidate argument as used in Roth and Lapata (2016). Although Roth and Lapata (2016) use a one-way LSTM layer to represent the dependency path from a predicate to its potential argument, we use a bi-LSTM layer for this purpose. We feed the embeddings of words and POS tags to the bi-LSTM layer. In this way, the resulting path embedding represents both predicate-toargument and argument-to-predicate paths. We concatenate the bidirectional path embeddings to generate hpathij, which represents the dependency relation between the predicate j and its candidate argument i. For the argument selection model, we apply the argument selection model (Zhang et al., 2017) to evaluate the relation between a predicate and its potential argument for each argument case. In the argument selection model, a single FNN is repeatedly used to calculate scores for a child word and its head candidate word, and then a softmax function calculates normalized probabilities of candidate heads. We use three different FNNs that correspond to the NOM, ACC and DAT cases. These three FNNs have the same inputs of the distributed representations of j-th predicate hpredj, i-th candidate argument hargi and path embedding hpathij between the predicate j and candidate argument i. The FNNs for NOM, ACC and DAT compute the argument scores scasek argi,predj, where casek ∈ {NOM, ACC, DAT}. Finally, the softmax function computes the probability p(argi|predj,casek) of candidate argument i for case k of j-th predicate as: p(argi|predj,casek) = exp  scasek argi,predj  X argi exp  scasek argi,predj . (8) Our argument selection model is similar to the neural network structure of Matsubayashi and Inui (2017). However, Matsubayashi and Inui (2017) does not use RNNs to read the whole sentence. Their model is also designed to choose a case label for a pair of a predicate and its argument candidate. In other words, their model can assign the same case label to multiple arguments by itself, while our model does not. Since case arguments are almost unique for each case of a predicate in Japanese, Matsubayashi and Inui (2017) select the argument that has the highest probability for each case, even though probabilities of case arguments are not normalized over argument candidates. The 479 model of Ouchi et al. (2017) has the same problem. 3.4 Validator We exploit a validator to train the generator using a raw corpus. It consists of a two-layer FNN to which embeddings of a predicate and its arguments are fed. For predicate j, the input of the FNN is the representations of the predicate h′ predj and three arguments n h′ NOM predj , h′ ACC predj , h′ DAT predj o that are inferred by the generator. The two-layer FNN outputs three values, and then three sigmoid functions compute the scores of scalar values in a range of [0, 1] for the NOM, ACC and DAT cases: n s′ NOM predj , s′ ACC predj , s′ DAT predj o . These scores are the outputs of the validator D(x). We use dropout of 0.5 at the FNN input and hidden layer. 
The generator and validator networks are coupled by the attention mechanism, or the weighted sum of the validator embeddings. As shown in Equation (8), we compute a probability distribution of candidate arguments. We use the weighted sum of embeddings v′(x) of candidate arguments to compute the input representations of the validator: h′ casek predj = Ex∼p(argi)[v′(x)] = X argi p(argi|predj,casek)v′(argi). This summation is taken over candidate arguments in the sentence and the exophora entities. Note that we use embeddings v′(x) for the validator that are different from the embeddings v(x) for the generator, in order to separate the computation graphs of the generator and the validator neural networks except the joint part. We use this weighted sum by the softmax outputs instead of the argmax function. This allows the backpropagation through this joint. We also feed the embedding of a predicate to the validator: h′ predj = v′(predj). (9) Note that the validator is a simple neural network compared with the generator. The validator has limited inputs of predicates and arguments and no inputs of other words in sentences. This allows the generator to overwhelm the validator during the adversarial training. Type Value Size of hidden layers of FNNs 1,000 Size of Bi-LSTMs 256 Dim. of word embedding 100 Dim. of POS, detailed POS, inflection form tags 10, 10, 9 Minibatch size for the generator and validator 16, 1 Table 2: Parameters for neural network structure and training. KWDLC # snt # of dep # of zero Train 11,558 9,227 8,216 Dev. 1,585 1,253 821 Test 2,195 1,780 1,669 Table 3: KWDLC data statistics. 3.5 Implementation Details The neural networks are trained using backpropagation. The backpropagation has been done to the word and POS tags. We use Adam (Kingma and Ba, 2015) at the initial training of the generator network for the gradient learning rule. In adversarial learning, Adagrad (Duchi et al., 2010) is suitable because of the stability of learning. We use pre-trained word embeddings from 100M sentences from Japanese web corpus by word2vec (Mikolov et al., 2013). Other embeddings and hidden weights of neural networks are randomly initialized. For adversarial training, we first train the generator for two epochs by the supervised method, and train the validator while fixing the generator for another epoch. This is because the validator training preceding the generator training makes the validator result worse. After this, we alternately do the unsupervised training of the generator (LG/UL), k-times of supervised training of the validator (LV/SL) and l-times of supervised training of the generator (LG/SL). We use the N(LG/UL)/N(LG/SL) = 1/4 and N(LV/SL)/N(LG/SL) = 1/4, where N(·) indicates the number of sentences used for training. Also we use minibatch of 16 sentences for both supervised and unsupervised training of the generator, while we do not use minibatch for validator training. Therefore, we use k = 16 and l = 4. Other parameters are summarized in Table 2. 480 KWDLC NOM ACC DAT # of dep 7,224 1,555 448 # of zero 6,453 515 1,248 Table 4: KWDLC training data statistics for each case. Case Zero Ouchi+ 2015 76.5 42.1 Shibata+ 2016 89.3 53.4 Gen 91.5 56.2 Gen+Adv 92.0‡ 58.4‡ Table 5: The results of case analysis (Case) and zero anaphora resolution (Zero). We use Fmeasure as an evaluation measure. ‡ denotes that the improvement is statistically significant at p < 0.05, compared with Gen using paired t-test. 4 Experiments 4.1 Experimental Settings Following Shibata et al. 
(2016), we use the KWDLC (Kyoto University Web Document Leads Corpus) corpus (Hangyo et al., 2012) for our experiments.1 This corpus contains various Web documents, such as news articles, personal blogs, and commerce sites. In KWDLC, lead three sentences of each document are annotated with PAS structures including zero pronouns. For a raw corpus, we use a Japanese web corpus created by Hangyo et al. (2012), which has no duplicated sentences with KWDLC. This raw corpus is automatically parsed by the Japanese dependency parser KNP. We focus on intra-sentential anaphora resolution, and so we apply a preprocess to KWDLC. We regard the anaphors whose antecedents are in the preceding sentences as NULL in the same way as Ouchi et al. (2015); Shibata et al. (2016). Tables 3 and 4 list the statistics of KWDLC. We use the exophora entities, i.e., an author and a reader, following the annotations in KWDLC. We also assign author/reader labels to the following expressions in the same way as Hangyo et al. (2013); Shibata et al. (2016): author “私” (I), “僕” (I), “我々” (we), “弊社” (our company) 1 The KWDLC corpus is available at http://nlp. ist.i.kyoto-u.ac.jp/EN/index.php?KWDLC reader “あなた” (you), “君” (you), “客” (customer), “皆様” (you all) Following Ouchi et al. (2015) and Shibata et al. (2016), we conduct two kinds of analysis: (1) case analysis and (2) zero anaphora resolution. Case analysis is the task to determine the correct case labels when predicates and their arguments have direct dependencies but their case markers are hidden by surface markers, such as topic markers. Zero anaphora resolution is a task to find certain case arguments that do not have direct dependencies to their predicates in the sentence. Following Shibata et al. (2016), we exclude predicates that the same arguments are filled in multiple cases of a predicate. This is relatively uncommon and 1.5 % of the whole corpus are excluded. Predicates are marked in the gold dependency parses. Candidate arguments are just other tokens than predicates. This setting is also the same as Shibata et al. (2016). All performances are evaluated with microaveraged F-measure (Shibata et al., 2016). 4.2 Experimental Results We compare two models: the supervised generator model (Gen) and the proposed semi-supervised model with adversarial training (Gen+Adv). We also compare our models with two previous models: Ouchi et al. (2015) and Shibata et al. (2016), whose performance on the KWDLC corpus is reported. Table 5 lists the experimental results. Our models (Gen and Gen+Adv) outperformed the previous models. Furthermore, the proposed model with adversarial training (Gen+Adv) was significantly better than the supervised model (Gen). 4.3 Comparison with Data Augmentation Model We also compare our GAN-based approach with data augmentation techniques. A data augmentation approach is used in Liu et al. (2017b). They automatically process raw corpora and make drops of words with some rules. However, it is difficult to directly apply their approach to Japanese PAS analysis because Japanese zero-pronoun depends on dependency trees. If we make some drops of arguments of predicates in sentences, this can cause lacks of nodes in dependency trees. If we prune some branches of dependency trees of the sentence, this cause the data bias problem. 
481 Case analysis Zero anaphora resolution Model NOM ACC DAT NOM ACC DAT Ouchi+ 2015 87.4 40.2 27.6 48.8 0.0 10.7 Shibata+ 2016 94.1 75.6 30.0 57.7 17.3 37.8 Gen 95.3 83.6 39.7 60.7 30.4 41.2 Gen+Adv 95.3 85.4 51.5 62.3 31.1 44.6 Table 6: The detailed results of case analysis and zero anaphora resolution for the NOM, ACC and DAT cases. Our models outperform the existing models in all cases. All values are evaluated with F-measure. Case Zero Gen 91.5 56.2 Gen+Aug 91.2 57.0 Gen+Adv 92.0‡ 58.4‡ Table 7: The comparisons of Gen+Adv with Gen and the data augmentation model (Gen+Aug). ‡ denotes that the improvement is statistically significant at p < 0.05, compared with Gen+Aug. Therefore we use existing training corpora and word embeddings for the data augmentation. First we randomly choose an argument word w in the training corpus and then swap it with another word w′ with the probability of p(w, w′). We choose top-20 nearest words to the original word w in the pre-trained word embedding as candidates of swapped words. The probability is defined as p(w, w′) ∝[v(w)⊤v(w′)]r, where r = 10. This probability is normalized by top-20 nearest words. We then merge this pseudo data and the original training corpus and train the model in the same way with the Gen model. We conducted several experiments and found that the model trained with the same amount of the pseudo data as the training corpus achieved the best result. Table 7 shows the results of the data augmentation model and the GAN-based model. Our Gen+Adv model performs better than the data augmented model. Note that our data augmentation model does not use raw corpora directly. 4.4 Discussion 4.4.1 Result Analysis We report the detailed performance for each case in Table 6. Among the three cases, zero anaphora resolution of the ACC and DAT cases is notoriously difficult. This is attributed to the fact that these ACC and DAT cases are fewer than the NOM case in the corpus as shown in Table 4. However, we can see that our proposed model, Gen+Adv, performs much better than the previous models especially for the ACC and DAT cases. Although the number of training instances of ACC and DAT is much smaller than that of NOM, our semisupervised model can learn PAS for all three cases using a raw corpus. This indicates that our model can work well in resource-poor cases. We analyzed the results of Gen+Adv by comparing with Gen and the model of Shibata et al. (2016). Here, we focus on the ACC and DAT cases because their improvements are notable. • “パックは洗って、分別してリサイクルに出 さなきゃいけないので手間がかかる。“ It is bothersome to wash, classify and recycle spent packs. In this sentence, the predicates “洗って” (wash), “分 別して” (classify), “(リサイクルに) 出す” (recycle) takes the same ACC argument, “パック” (pack). This is not so easy for Japanese PAS analysis because the actual ACC case marker “を” (wo) of “パック” (pack) is hidden by the topic marker “は” (wa). The Gen+Adv model can detect the correct argument while the model of Shibata et al. (2016) fails. In the Gen+Adv model, each predicate gives a high probability to “パック” (pack) as an ACC argument and finally chooses this. We found many examples similar to this and speculate that our model captures a kind of selectional preferences. The next example is an error of the DAT case by the Gen+Adv model. • “各専門分野もお任せ下さい。” please leave every professional field (to φ) The gold label of this DAT case (to φ) is NULL because this argument is not written in the sentence. 
482 0 2 4 6 8 10 12 14 16 18 Epoch 60 70 80 90 F-value NOM ACC DAT 0 2 4 6 8 10 12 14 16 18 Epoch 20 30 40 50 60 F-value of Zero NOM ACC DAT Figure 3: Left: validator scores with the development set during adversarial training epochs. Right: generator scores for Zero with the development set during adversarial training epochs. However, the Gen+Adv model judged the DAT argument as “author”. Although we cannot specify φ as “author” only from this sentence, “author” is a possible argument depending on the context. 4.4.2 Validator Analysis We also evaluate the performance of the validator during the adversarial training with raw corpora. Figure 3 shows the validator performance and the generator performance of Zero on the development set. The validator score is evaluated with the outputs of generator. We notice that the NOM case and the other two cases have different curves in both graphs. This can be explained by the speciality of the NOM case. The NOM case has much more author/reader expressions than the other cases. The prediction of author/reader expressions depends not only on selectional preferences of predicates and arguments but on the whole of sentences. Therefore the validator that relies only on predicate and argument representations cannot predict author/reader expressions well. In the ACC and DAT cases, the scores of the generator and validator increase in the first epochs. This suggests that the validator learns the weakness of the generator and vice versa. However, in later epochs, the scores of the generator increase with fluctuation, while the scores of the validator saturates. This suggests that the generator gradually becomes stronger than the validator. 5 Related Work Shibata et al. (2016) proposed a neural networkbased PAS analysis model using local and global features. This model is based on the non-neural model of Ouchi et al. (2015). They achieved state-of-the-art results on case analysis and zero anaphora resolution using the KWDLC corpus. They use an external resource to extract selectional preferences. Since our model uses an external resource, we compare our model with the models of Shibata et al. (2016) and Ouchi et al. (2015). Ouchi et al. (2017) proposed a semantic role labeling-based PAS analysis model using GridRNNs. Matsubayashi and Inui (2017) proposed a case label selection model with feature-based neural networks. They conducted their experiments on NAIST Text Corpus (NTC) (Iida et al., 2007, 2016). NTC consists of newspaper articles, and does not include the annotations of author/reader expressions that are common in Japanese natural sentences. 6 Conclusion We proposed a novel Japanese PAS analysis model that exploits a semi-supervised adversarial training. The generator neural network learns Japanese PAS and selectional preferences, while the validator is trained against the generator errors. This validator enables the generator to be trained from raw corpora and enhance it with external knowledge. In the future, we will apply this semi-supervised training method to other NLP tasks. Acknowledgment This work was supported by JST CREST Grant Number JPMJCR1301, Japan and JST ACT-I Grant Number JPMJPR17U8, Japan. 483 References Denny Britz, Quoc Le, and Reid Pryzant. 2017. Effective domain mixing for neural machine translation. In Proceedings of the Second Conference on Machine Translation. Association for Computational Linguistics, Copenhagen, Denmark, pages 118–126. http://www.aclweb.org/anthology/W17-4712. John Duchi, Elad Hazan, and Yoram Singer. 2010. 
Adaptive subgradient methods for online learning and stochastic optimization. UCB/EECS-2010-24. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada. pages 2672–2680. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In Proceedings of the International Conference on Learning Representations (ICLR). Masatsugu Hangyo, Daisuke Kawahara, and Sadao Kurohashi. 2012. Building a diverse document leads corpus annotated with semantic relations. In Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation. Faculty of Computer Science, Universitas Indonesia, Bali,Indonesia, pages 535–544. Masatsugu Hangyo, Daisuke Kawahara, and Sadao Kurohashi. 2013. Japanese zero reference resolution considering exophora and author/reader mentions. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 924–934. Ryu Iida, Mamoru Komachi, Kentaro Inui, and Yuji Matsumoto. 2007. Annotating a japanese text corpus with predicate-argument and coreference relations. In Proceedings of the Linguistic Annotation Workshop, LAW@ACL 2007, Prague, Czech Republic, June 28-29, 2007. pages 132–139. Ryu Iida, Kentaro Torisawa, Jong-Hoon Oh, Canasai Kruengkrai, and Julien Kloetzer. 2016. Intrasentential subject zero anaphora resolution using multi-column convolutional neural network. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1244–1254. https://aclweb.org/anthology/D161132. D. P. Kingma and J. Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Jiwei Li, Will Monroe, Tianlin Shi, S´ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 2157–2169. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017a. Adversarial multi-task learning for text classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 1–10. Ting Liu, Yiming Cui, Qingyu Yin, Wei-Nan Zhang, Shijin Wang, and Guoping Hu. 2017b. Generating and exploiting large-scale pseudo training data for zero pronoun resolution. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 102–111. Yuichiroh Matsubayashi and Kentaro Inui. 2017. Revisiting the design issues of local models for japanese predicate-argument structure analysis. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Asian Federation of Natural Language Processing, Taipei, Taiwan, pages 128–133. 
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. volume abs/1301.3781. http://arxiv.org/abs/1301.3781. Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. 2016. Distributional smoothing by virtual adversarial examples. In Proceedings of the International Conference on Learning Representations (ICLR). Hiroki Ouchi, Hiroyuki Shindo, Kevin Duh, and Yuji Matsumoto. 2015. Joint case argument identification for japanese predicate argument structure analysis. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 961–970. Hiroki Ouchi, Hiroyuki Shindo, and Yuji Matsumoto. 2017. Neural modeling of multi-predicate interactions for japanese predicate argument structure analysis. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 1591– 1600. Michael Roth and Mirella Lapata. 2016. Neural semantic role labeling with dependency path embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics 484 (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1192– 1202. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen, and Xi Chen. 2016. Improved techniques for training gans. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29. Curran Associates, Inc., pages 2234–2242. Ryohei Sasano and Sadao Kurohashi. 2011. A discriminative approach to japanese zero anaphora resolution with large-scale lexicalized case frames. In Proceedings of 5th International Joint Conference on Natural Language Processing. Asian Federation of Natural Language Processing, Chiang Mai, Thailand, pages 758–766. Tomohide Shibata, Daisuke Kawahara, and Sadao Kurohashi. 2016. Neural network-based model for japanese predicate argument structure analysis. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1235–1244. Jost Tobias Springenberg. 2015. Unsupervised and semi-supervised learning with categorical generative adversarial networks. Sandeep Subramanian, Sai Rajeswar, Francis Dutil, Chris Pal, and Aaron Courville. 2017. Adversarial generation of natural language. In Proceedings of the 2nd Workshop on Representation Learning for NLP. Association for Computational Linguistics, Vancouver, Canada, pages 241–251. Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2017. Dependency parsing as head selection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics, Valencia, Spain, pages 665–676.
2018
44
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 485–495 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 485 Improving Event Coreference Resolution by Modeling Correlations between Event Coreference Chains and Document Topic Structures Prafulla Kumar Choubey and Ruihong Huang Department of Computer Science and Engineering Texas A&M University (prafulla.choubey, huangrh)@tamu.edu Abstract This paper proposes a novel approach for event coreference resolution that models correlations between event coreference chains and document topical structures through an Integer Linear Programming formulation. We explicitly model correlations between the main event chains of a document with topic transition sentences, inter-coreference chain correlations, event mention distributional characteristics and sub-event structure, and use them with scores obtained from a local coreference relation classifier for jointly resolving multiple event chains in a document. Our experiments across KBP 2016 and 2017 datasets suggest that each of the structures contribute to improving event coreference resolution performance. 1 Introduction Event coreference resolution aims to identify and link event mentions in a document that refer to the same real-world event, which is vital for identifying the skeleton of a story and text understanding and is beneficial to numerous other NLP applications such as question answering and summarization. In spite of its importance, compared to considerable research for resolving coreferential entity mentions, far less attention has been devoted to event coreference resolution. Event coreference resolution thus remained a challenging task and the best performance remained low. Event coreference resolution presents unique challenges. Compared to entities, coreferential event mentions are fewer in a document and much more sparsely scattered across sentences. Figure 1 shows a typical news article. Here, the main entity, “President Chen”, appears frequently in evFigure 1: An example document to illustrate the characteristics of event (red) and entity (blue) coreference chains. ery sentence, while the main event “hearing” and its accompanying event “detention” are mentioned much less frequently. If we look more closely, referring back to the same entity serves a different purpose than referring to the same event. The protagonist entity of a story is involved in many events and relations; thus, the entity is referred back each time such an event or relation is described. In this example, the entity was mentioned when describing various events he participated or was involved in, including “detention”, “said”, “pointed out”, “remitted”, “have a chance”, “release”, “cheating”, “asked” and “returned”, as well as when describing several relations involving him, including “former president”, “his family” and “his wife”. In contrast, most events only appear once in a text, and there is less motivation to repeat them: a story is mainly formed by a se486 Dataset Type 0 1 2 3 4 > 4 richERE event 11 34 20 9 7 19 entity 34 33 14 6 3 10 ACE-05 event 5 33 19 10 9 24 entity 37 28 12 7 4 13 KBP 2015 event 15 34 12 9 6 24 KBP 2016 event 8 43 15 7 6 21 KBP 2017 event 12 49 13 7 4 15 Table 1: Percentages of adjacent (event vs. entity) mention pairs based on the number of sentences between two mentions. ries of related but different events. 
Essentially, (1) the same event is referred back only when a new aspect or further information of the event has to be described, and (2) repetitions of the same events are mainly used for content organization purposes and, consequently, correlate well with topic structures. Table 1 further shows the comparisons of positional patterns between event coreference and entity coreference chains, based on two benchmark datasets, ERE (Song et al., 2015) and ACE05 (Walker et al., 2006), where we paired each event (entity) mention with its nearest antecedent event (entity) mention and calculated the percentage of (event vs. entity) coreferent mention pairs based on the number of sentences between two mentions. Indeed, for entity coreference resolution, centering and nearness are striking properties (Grosz et al., 1995), and the nearest antecedent of an entity mention is mostly in the same sentence or in the immediately preceding sentence ( 70%). This is especially true for nominals and pronouns, two common types of entity mentions, where the nearest preceding mention that is also compatible in basic properties (e.g., gender, person and number) is likely to co-refer with the current mention. In contrast, coreferential event mentions are rarely from the same sentence ( 10%) and are often sentences apart. The sparse distribution of coreferent event mentions also applies to the three KBP corpora used in this work. To address severe sparsity of event coreference relations in a document, we propose a holistic approach to identify coreference relations between event mentions by considering their correlations with document topic structures. Our key observation is that event mentions make the backbone of a document and coreferent mentions of the same event play a key role in achieving a coherent content structure. For example, in figure 1, the events “hearing” and “detention” were mentioned in the headline (H), in the first sentence (S1) as a story overview, in the second sentence (S2) for transitioning to the body section of the story describing what happened during the hearing, and then in the fifth sentence (S5) for transitioning to the ending section of the story describing what happened after the hearing. By attaching individual event mentions to a coherent story and its topic structures, our approach recognizes event coreference relations that are otherwise not easily seen due to a mismatch of two event mentions’ local contexts or long distances between event mentions. We model several aspects of correlations between event coreference chains and document level topic structures, in an Integer Linear Programming (ILP) joint inference framework. Experimental results on the benchmark event coreference resolution dataset KBP-2016 (Ellis et al., 2016) and KBP 2017 (Getman et al., 2017) show that the ILP system greatly improves event coreference resolution performance by modeling different aspects of correlations between event coreferences and document topic structures, which outperforms the previous best system on the same dataset consistently across several event coreference evaluation metrics. 2 Correlations between Event Coreference Chains and Document Topic Structures We model four aspects of correlations. 
Correlations between Main Event Chains and Topic Transition Sentences: the main events of a document, e.g., “hearing” and “detention” in this example 1, usually have multiple coreferent event mentions that span over a large portion of the document and align well with the document topic layout structure (Choubey et al., 2018). While fine-grained topic segmentation is a difficult task in its own right, we find that topic transition sentences often overlap in content (for reminding purposes) and can be identified by calculating sentence similarities. For example, sentences S1, S2 and S5 in Figure 1 all mentioned the two main events and the main entity “President Chen”. We, therefore, encourage coreference links between event mentions that appear in topic transition sentences by designing constraints in ILP and modifying the objective function. In addition, to avoid fragmented partial event chains and 487 recover complete chains for the main events, we also encourage associating more coreferent event mentions to a chain that has a large stretch (the number of sentences between the first and the last event mention based on their textual positions). Correlations across Semantically Associated Event Chains: semantically associated events often co-occur in the same sentence. For example, mentions of the two main events “hearing” and “detention” co-occur across the document in sentences H, S1, S2 and S5. The correlation across event chains is not specific to global main events, for example, the local events “remitted” and “release” have their mentions co-occur in sentences S3 and S4 as well. In ILP, we leverage this observation and encourage creating coreference links between event mentions in sentences that contain other already known coreferent event mentions. Genre-specific Distributional Patterns: we model document level distributional patterns of coreferent event mentions that may be specific to a genre in ILP. Specifically, news article often begins with a summary of the overall story and then introduces the main events and their closely associated events. In subsequent paragraphs, detailed information of events may be introduced to provide supportive evidence to the main story. Thereby, a majority of event coreference chains tend to be initiated in the early sections of the document. Event mentions in the later paragraphs may exist as coreferent mentions of an established coreference chain or as singleton event mentions which, however, are less likely to initiate a new coreference chain. Inspired by this observation, we simply modify the objective function of ILP to encourage more event coreference links in early sections of a document. Subevents: subevents exist mainly to provide details and evidence for the parent event, therefore, the relation between subevents and their parent event presents another aspect of correlations between event relations and hierarchical document topic structures. Subevents may share the same lexical form as the parent event and cause spurious event coreference links (Araki et al., 2014). We observe that subevents referring to specific actions were seldomly referred back in a document and are often singleton events. Following the approach proposed by (Badgett and Huang, 2016), we identify such specific action events and improve event coreference resolution by specifying constraints in ILP to discourage coreference links between a specific action event and other event mentions. 
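As a concrete illustration of the positional statistics that motivate these correlations (Table 1 in the introduction), the short sketch below computes the percentage of nearest-antecedent mention pairs by sentence distance from gold coreference chains. The input format is an assumption on our part, not the authors' data structures.

```python
from collections import Counter

def antecedent_distance_stats(chains):
    # chains: list of coreference chains, each given as the sentence indices of
    # its mentions in textual order (assumed input format)
    buckets = Counter()
    for chain in chains:
        for prev, cur in zip(chain, chain[1:]):      # mention and nearest antecedent
            dist = cur - prev
            buckets[dist if dist <= 4 else ">4"] += 1
    total = sum(buckets.values()) or 1
    # returns e.g. {0: ..., 1: ..., ..., ">4": ...} as percentages,
    # i.e. one row of a Table 1 style comparison
    return {k: round(100.0 * v / total, 1) for k, v in buckets.items()}
```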
3 Related Work Compared to entity coreference resolution (Lee et al., 2017; Clark and Manning, 2016a,b; Martschat and Strube, 2015; Lee et al., 2013), far less research was conducted for event coreference resolution. Most existing methods (Ahn, 2006; Chen et al., 2009; Cybulska and Vossen, 2015a,b) heavily rely on surface features, mainly event arguments (i.e., entities such as event participants, time, location, etc.) that were extracted from local contexts of two events, and determine that two events are coreferential if their arguments match. Often, a clustering algorithm, hierarchical Bayesian (Bejan and Harabagiu, 2010, 2014; Yang et al., 2015) or spectral clustering algorithms (Chen and Ji, 2009), is applied on top of a pairwise surface feature based classifier for inducing event clusters. However, identifying potential arguments, linking arguments to a proper event mention, and recognizing compatibilities between arguments are all error-prone (Lu et al., 2016). Joint event and entity coreference resolution (Lee et al., 2012), joint inferences of event detection and event coreference resolution (Lu and Ng, 2017), and iterative information propagation (Liu et al., 2014; Choubey and Huang, 2017a) have been proposed to mitigate argument mismatch issues. However, such methods are incapable of handling more complex and subtle cases, such as partial event coreference with incompatible arguments (Choubey and Huang, 2017a) and cases lacking informative local contexts. Consequently, many event coreference links were missing and the resulted event chains are fragmented. The low performance of event coreference resolution limited its uses in downstream applications. (?) shows that instead of human annotated event coreference relations, using system predicted relations resulted in a significant performance reduction in identifying the central event of a document. Moreover, the recent research by Moosavi and Strube (2017) found that the extensive use of lexical and surface features biases entity coreference resolvers towards seen mentions and do not generalize to unseen domains, and the finding can perfectly apply to event coreference resolution. Therefore, we propose to improve event coreference resolution by modeling correlations between event corefer488 ences and the overall topic structures of a document, which is more likely to yield robust and generalizable event coreference resolvers. 4 Modeling Event Coreference Chain Topic Structure Correlations Using Integer Linear Programming We model discourse level event-topic correlation structures by formulating the event coreference resolution task as an Integer Linear Programming (ILP) problem. Our baseline ILP system is defined over pairwise scores between event mentions obtained from a pairwise neural network-based coreference resolution classifier. 4.1 The Local Pairwise Coreference Resolution Classifier Our local pairwise coreference classifier uses a neural network model based on features defined for an event mention pair. It includes a common layer with 347 neurons shared between two event mentions to generate embeddings corresponding to word lemmas (300) and parts-of-speech (POS) tags (47). The common layer aims to enrich event word embeddings with the POS tags using the shared weight parameters. It also includes a second layer with 380 neurons to embed suffix1 and prefix 2 of event words, distances (euclidean, absolute and cosine) between word embeddings of two event lemmas and common arguments between two event mentions. 
The output from the second layer is concatenated and fed into the third neural layer with 10 neurons. The output embedding from the third layer is finally fed into an output layer with 1 neuron that generates a score indicating the confidence of assigning the given event pair to the same coreference cluster. All three layers and the output layer use the sigmoid activation function. 4.2 The Basic ILP for Event Coreference Resolution Let λ represents the set of all event mentions in a document, Λ denotes the set of all event mention pairs i.e. Λ = {< i, j > | < i, j > ∈ λ × λ and i < j} and pij = pcls(coref|i, j) represents the cost of assigning event mentions i and j to the same coreferent cluster, we can for1te, tor, or, ing, cy, id, ed, en, er, ee, pt, de, on, ion, tion, ation, ction, de, ve, ive, ce, se, ty, al, ar, ge, nd, ize, ze, it, lt 2re, in, at, tr, op mulate the baseline objective function that minimizes equation 1. Further we add constraints (equation 2) over each triplets of mentions to enforce transitivity (Denis et al., 2007; Finkel and Manning, 2008). This guarantees legal clustering by ensuring that xij = xjk = 1 implies xik = 1. ΘB = X i,j∈Λ −log(pij)xij −log(1 −pij)(¬xij) s.t. xij ∈{0, 1} (1) ¬xij + ¬xjk ≥¬xik (2) We then add constituent objective functions and constraints to the baseline ILP formulation to induce correlations between coreference chains and topical structures (ΘT ), discourage fragmented chains (ΘG), encourage semantic associations among chains (ΘC), model genre-specific distributional patterns (ΘD) and discourage subevents from having coreferent mentions (ΘS). They are described in the following subsections. 4.2.1 Modeling the Correlation between Main Event Chains and Topic Transition Sentences As shown in the example Figure 1, main events are likely to have mentions appear in topic transition sentences. Therefore, We add the following objective function (equation 3) to the basic objective function and add the new constraint 4 in order to encourage coreferent event mentions to occur in topic transition sentences. ΘT = X m,n∈Ω −log(smn)wmn −log(1 −smn)(¬wmn) s.t. wmn ∈{0, 1} (n −m) ≥|S|/θs (3) X i′∈ξm,j′∈ξn xi′j′ ≥wmn (4) Specifically, let ω represents the set of sentences in a document and Ωdenotes the set of sentence pairs i.e. Ω= {< m, n > | < m, n > ∈ω × ω and m < n}. Then, let sij = psim(simscore|m, n), which represents the similarity score between sentences m and n and |S| equals to the number of sentences in a given document. Here, the indicator variable wmn indicates if the two sentences m and n are topic transition sentences. Essentially, when two sentences have a high similarity score (> 0.5) and are not near (with |S|/θsor more sentences 489 apart, in our experiments we set θs to 5), this objective function ΘT tries to set the corresponding indicator variable wmn to 1. Then, we add constraint 4 to encourage coreferent event mentions to occur in topic transition sentences. Note that ξm refers to all the event mentions in sentence m, and xij is the indicator variable which is set to 1 if event mentions defined by index i and j are coreferent. Thus, the above constraint ensures that two topic transition sentences contain at least one coreferent event pair. Identifying Topic Transition Sentences Using Sentence Similarities: First, we use the unsupervised method based on weighted word embedding average proposed by Arora et al. (2016) to obtain sentence embeddings. 
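Before turning to that computation, here is a minimal sketch of the basic ILP of Equations (1) and (2) in PuLP, the solver library the paper reports using in Section 4.3. The variable names are ours, and only the pairwise classifier probabilities p_ij are assumed as input.

```python
import math
from itertools import combinations
import pulp

def basic_coreference_ilp(n_mentions, p):
    # p[(i, j)]: classifier probability that mentions i < j corefer
    prob = pulp.LpProblem("event_coref", pulp.LpMinimize)
    x = {ij: pulp.LpVariable("x_%d_%d" % ij, cat="Binary")
         for ij in combinations(range(n_mentions), 2)}
    eps = 1e-8
    # objective of Equation (1)
    prob += pulp.lpSum(-math.log(p[ij] + eps) * x[ij]
                       - math.log(1 - p[ij] + eps) * (1 - x[ij]) for ij in x)
    # transitivity constraints of Equation (2); adding the other two
    # permutations of each triple would enforce full transitive closure
    for i, j, k in combinations(range(n_mentions), 3):
        prob += (1 - x[(i, j)]) + (1 - x[(j, k)]) >= (1 - x[(i, k)])
    prob.solve()
    return {ij: int(x[ij].value()) for ij in x}
```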
We first compute the weighted average of words’ embeddings in a sentence, where the weight of a word w is given by a/(a+p(w)). Here, p(w) represents the estimated word frequency obtained from English Wikipedia and a is a small constant (1e-5). We then compute the first principal component of averaged word embeddings corresponding to sentences in a document and remove the projection on the first principal component from each averaged word embedding for each sentence. Then using the resulted averaged word embedding as the sentence embedding, we compute the similarity between two sentences as cosine similarity between their embeddings. We particularly choose this simple unsupervised model to reduce the reliance on any additional corpus for training a new model for calculating sentence similarities. This model was found to perform comparably to supervised RNN-LSTM based models for the semantic textual similarity task. Constraints for Avoiding Fragmented Partial Event Chains: The above equations (3-4) consider a pair of sentences and encourage two coreferent event mentions to appear in a pair of topic transition sentences. But the local nature of these constraints can lead to fragmented main event chains. Therefore, we further model the distributional characteristics of global event chains and encourage the main event chains to have a large number of coreferential mentions and a long stretch (the number of sentences that are present in between the first and last event mention of a chain), to avoid creating partial chains. Specifically, we add the following objective function (equation 5) and the new constraints (equation 6 and 7): ΘG = − X i,j∈µ γij (5) σij = X k<i ¬xki ∧ X j<l ¬xjl ∧xij σij ∈{0, 1} (6) Γi = X k,i∈Λ xki + X i,j∈Λ xij M(1 −yij) ≥(ϕ[j] −ϕ[i]).σij −⌈0.75 (|S|)⌉ γij −Γi −Γj ≥M.yij Γi, Γj, γij ∈Z; Γi, Γj, γij ≥0; yij ∈{0, 1} (7) First, we define an indicator variable σij by equation 6 3, corresponding to each event mention pair, that takes value 1 if (1) the event mentions at index i and j are coreferent; (2) the event mention at index i doesn’t corefer to any of the mentions preceding it; and (3) mention at index j doesn’t corefer to any event mention following it. Essentially, setting σij to 1 defines an event chain that starts from the event mention i and ends at the event mention j. Then with equation 7, variable σij is used to identify main event chains as those chains which are extended to at least 75% of the document. When a chain is identified as a global chain, we encourage it to have more coreferential mentions. Here, Γi (Γj) equals the sum of indicator variables x corresponding to event pairs that include the event mention at index i (j) i.e. the number of mentions that are coreferent to i (j), ϕ[i] (ϕ[j]) represents the sentence number of event mention i (j), M is a large positive number and yij represents a slack variable that takes the value 0 if the event chain represented by σij is a global chain. Given σi,j is identified as a global chain, variable γij equals the sum of variables Γi and Γj and is used in the objective function ΘG (equation 5) to encourage more coreferential mentions. 3 Equation 6 can be implemented as np + ns ≤ X k<i xki + X j<l xjl −xij + (np + ns + 1).σij X k<i xki + X j<l xjl −xij + (np + ns + 1).σij ≥0 where np, ns represent the number of event mentions preceding event mention i and the number of event mentions following event mention j respectively. 
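A small numpy sketch of the similarity computation just described, assuming precomputed word vectors and estimated word frequencies, with a = 1e-5 as in the text and sentences given as token lists:

```python
import numpy as np

def sif_sentence_embeddings(sentences, vec, p_w, a=1e-5):
    # sentences: list of token lists; vec: word -> embedding; p_w: word -> estimated frequency
    emb = np.array([np.mean([a / (a + p_w.get(w, 0.0)) * vec[w]
                             for w in sent if w in vec], axis=0)
                    for sent in sentences])
    u, _, _ = np.linalg.svd(emb.T, full_matrices=False)
    pc = u[:, :1]                       # first principal component of the averaged embeddings
    return emb - emb @ pc @ pc.T        # remove the projection on the first principal component

def sentence_similarity(e1, e2):
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2) + 1e-12))
```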
490 4.2.2 Cross-chain Inferences As illustrated through Figure 1, semantically related events tend to have their mentions co-occur within the same sentence. So, we define the objective function (equation 8) and constraints (9) to favor a sentence with a mention from one event chain to also contain a mention from another event chain, if the two event chains are known to have event mentions co-occur in several other sentences. ΘC = − X m,n∈Ω Φmn (8) Φmn = X i∈ξm,j∈ξn xij |ξm| > 1; |ξn| > 1; Φmn ∈Z; Φmn ≥0 (9) To do so, we first define a variable φmn that equals the number of coreferent event pairs in a sentence pair, with each sentence having more than one event mention. We then define ΘC to minimize the negative sum of φmn. Following the previous notations, ξm in the above equation represents the event mentions in sentence m. 4.2.3 Modeling Segment-wise Distributional Patterns The position of an event mention in a document has a direct influence on event coreference chains. Event mentions that occur in the first few paragraphs are more likely to initiate an event chain. On the other hand, event mentions in later parts of a document may be coreferential with a previously seen event mention but are extremely unlikely to begin a new coreference chain. This distributional association is even stronger in the journalistic style of writing. We model this through a simple objective function and constraints (equation 10). ΘD = − X i∈ξm,j∈ξn xij + X k∈ξp,l∈ξq xkl s.t. m, n < ⌊α|S|⌋; p, q > ⌈β|S|⌉ α ∈[0, 1]; β ∈[0, 1] (10) Specifically, for the event pairs that belong to the first α (or the last β) sentences in a document, we add the negative (positive) sum of their indicator variables (x) in objective function ΘD. The equation 10 is meant to inhibit coreference links between event mentions that exist within the latter half of document. They do not influence the links within event chains that start early and extend till the later segments of the document. It is also important to understand that positionbased features used in entity coreference resolution (Haghighi and Klein, 2007) are usually defined for an entity pair. However, we model the distributional patterns of an event chain in a document. 4.2.4 Restraining Subevents from Being Included in Coreference Chains Subevents are known to be a major source of false coreference links due to their high surface similarity with their parent events. Therefore, we discourage subevents from being included in coreference chains in our model and modify the global optimization goal by adding a new objective function (equation 11). ΘS = X s∈S Γs (11) where S represents the set of subevents in a document. We define the objective function ΘS as the sum of Γs, where Γs equals the number of mentions that are coreferent to s. Then our goal is to minimize ΘS and restrict the subevents from being included in coreference chains. We identify probable subevents by using surface syntactic cues corresponding to identifying a sequence of events in a sentence (Badgett and Huang, 2016). In particular, a sequence of two or more verb event mentions in a conjunction structure are extracted as subevents. 4.3 The full ILP Model and the Parameters The equations 3-11 model correlations between non-local structures within or across event chains and document topical structures. We perform ILP inference for coreference resolution by optimizing a global objective function(Θ), defined in equation 12, that incorporates prior knowledge by means of hard or soft constraints. 
Θ = κBΘB + κT ΘT + κGΘG + κCΘC + κDΘD + κSΘS (12) Here, all the κ parameters are floating point constants. For the sake of simplicity, we set κB and κT to 1.0 and κG = κC. Then we estimate the parameters κG(κC) and κD through 2-d grid search in range [0, 5.0] at the interval of 0.5 on a held out training data. We found that the best performance was obtained for κC = κG = 0.5 and κD = 2.5. Since, ΘS aims to inhibit subevents from being included in coreference chains, we set a high value for κS and found that, indeed, the performance 491 remained same for all the values of κS in range [5.0,15.0]. In our final model, we keep κS = 10.0. Also, we found that the performance is roughly invariant to the parameters κG and κC if they are set to values between 0.5 and 2.5. In our experiments, we process each document to define a distinct ILP problem which is solved using the PuLP library (Mitchell et al., 2011). 5 Evaluation 5.1 Experimental Setup We trained our ILP system on the KBP 2015 (Ellis et al., 2015) English dataset and evaluated the system on KBP 2016 and KBP 2017 English datasets4. All the KBP corpora include documents from both discussion forum5 and news articles. But as the goal of this study is to leverage discourse level topic structure in a document for improving event coreference resolution performance, we only evaluate the ILP system using regular documents (news articles) in the KBP corpora. Specifically, we train our event extraction system and local coreference resolution classifier on 310 documents from the KBP 2015 corpus that consists of both discussion forum documents and news articles, tune the hyper-parameters corresponding to ILP using 50 news articles6 from the KBP 2015 corpus and evaluate our system on 4The ECB+ (Cybulska and Vossen, 2014) corpus is another commonly used dataset for evaluating event coreference resolution performance. But we determined that this corpus is not appropriate for evaluating our ILP model that explicitly focuses on using discourse level topic structures for event coreference resolution. Particularly, the ECB+ corpus was created to facilitate both cross-document and indocument event coreference resolution research. Thus, the documents in the corpus were grouped based on several common topics and in each document, event mentions and coreference relations were only annotated selectively in sentences that are on a common topic. When the annotated sentences in each document are stitched together, they do not well reveal the original document structure, which makes the ECB+ corpus a bad choice for evaluating our approach. In addition, due to the selective annotation issue, in-document event coreference resolution with the ECB+ corpus is somewhat easier than with the KBP corpus, which partly explained the significant differences of published in-document event coreference resolution results on the two corpora. 5Each discussion forum document consists of a series of posts in an online discussion thread, which lacks coherent discourse structures as a regular document. Therefore, only news articles in the KBP corpora are appropriate for evaluating our approach. 6KBP 2015 dataset consists of 181 and 179 documents from discussion forum and news articles respectively. We randomly picked 50 documents from news articles for tuning ILP hyper-parameters and remaining 310 documents for training classifiers. news articles from the official KBP 2016 and 2017 evaluation corpora7 respectively. 
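Before moving on to the results, the parameter estimation of Section 4.3 can be pictured with the short sketch below, which assembles the weighted objective of equation 12 and runs the 2-d grid search over κG (= κC) and κD on held-out documents. Only the weight ranges and the tied parameters come from the text; the objective terms and the scoring function are hypothetical stand-ins for machinery the paper does not show.

```python
# Hypothetical sketch of the Section 4.3 search; each Theta term is assumed to be
# a PuLP expression, and score_fn solves one ILP per held-out document and
# returns the average coreference F1 under the given weights.
import numpy as np

def combined_objective(terms, w):
    # equation 12: Theta = kappa_B*Theta_B + kappa_T*Theta_T + ... + kappa_S*Theta_S
    return sum(w[name] * terms[name] for name in ("B", "T", "G", "C", "D", "S"))

def grid_search(heldout_docs, score_fn, kappa_s=10.0):
    best_w, best_f1 = None, -1.0
    for kappa_g in np.arange(0.0, 5.5, 0.5):        # kappa_C is tied to kappa_G
        for kappa_d in np.arange(0.0, 5.5, 0.5):
            w = dict(B=1.0, T=1.0, G=kappa_g, C=kappa_g, D=kappa_d, S=kappa_s)
            f1 = score_fn(heldout_docs, w)          # one ILP per document, averaged F1
            if f1 > best_f1:
                best_w, best_f1 = w, f1
    return best_w, best_f1
```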
For direct comparisons, the results reported for the baselines, including the previous state-of-the-art model, were based on news articles in the test datasets as well. We report the event coreference resolution results based on the version 1.8 of the official KBP 2017 scorer. The scorer employs four coreference scoring measures, namely B3 (Bagga and Baldwin, 1998), CEAFe (Luo, 2005), MUC (Vilain et al., 1995) and BLANC (Recasens and Hovy, 2011) and the unweighted average of their F1 scores (AV GF1). 5.2 Event Mention Identification Lu and Ng (2017) Ours Corpus Untyped Typed Untyped Typed KBP 2016 60.13 49.00 60.03 45.45 KBP 2017 62.89 49.34 Table 2: F1 scores for event mention extraction on the KBP 2016 and 2017 corpus We use an ensemble of multi-layer feed forward neural network classifiers to identify event mentions (Choubey and Huang, 2017b). All basic classifiers are trained on features derived from the local context of words. The features include the embedding of word lemma, absolute difference between embeddings of word and its lemma, prefix and suffix of word and pos-tag and dependency relation of its context words, modifiers and governor. We trained 10 classifiers on same feature sets with slightly different neural network architectures and different training parameters including dropout rate, optimizer, learning rate, epochs and network initialization. All the classifiers use relu, tanh and softmax activations in the input, hidden and output layers respectively. We use GloVe vectors (Pennington et al., 2014) for word embeddings and one-hot vectors for pos-tag and dependency relations in each individual model. Postagging, dependency parsing, named entity recognition and entity coreference resolution are performed using Stanford CoreNLP (Manning et al., 2014) Table 2 shows the event mention identification results. We report the F1 score for event mention identification based on the KBP scorer, which considers a mention correct if its span, type and sub7There are 85 and 83 news articles in KBP 2016 and 2017 corpora respectively. 492 KBP 2016 KBP 2017 Model B3 CEAFe MUC BLANC AV G B3 CEAFe MUC BLANC AV G Local classifier 51.47 47.96 26.29 30.82 39.13 50.24 48.47 30.81 29.94 39.87 Clustering 46.97 41.95 18.79 26.88 33.65 46.51 40.21 23.10 25.08 33.72 Basic ILP 51.44 47.77 26.65 30.95 39.19 50.4 48.49 31.33 30.58 40.2 +Topic structure 51.44 47.94 28.86 31.87 40.03 50.39 48.23 33.08 31.26 40.74 +Cross-chain 51.09 47.53 31.27 33.07 40.74 50.39 47.67 35.15 31.88 41.27 +Distribution 51.06 48.28 33.53 33.63 41.62 50.42 48.67 37.52 32.08 42.17 +Subevent 51.67 49.1 34.08 34.08 42.23 50.35 48.61 37.24 31.94 42.04 Joint learning 50.16 48.59 32.41 32.72 40.97 Table 3: Results for event coreference resolution systems on the KBP 2016 and 2017 corpus. Joint learning results correspond to the actual result files evaluated in (Lu and Ng, 2017). The file was obtained from the authors. type are the same as the gold mention and assigns a partial score if span partially overlaps with the gold mention. We also report the event mention identification F1 score that only considers mention spans and ignores mention types. We can see that compared to the recent system by (Lu and Ng, 2017) which conducts joint inferences of both event mention detection and event coreference resolution, detecting types for event mentions is a major bottleneck to our event extraction system. 
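As a rough picture of the mention detectors described above, the sketch below shows one member of the ensemble and a simple way to combine the ten classifiers; the use of PyTorch, the layer sizes, and score averaging as the combination rule are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class MentionClassifier(nn.Module):
    """One feed-forward event-mention classifier over the lexical/syntactic features."""
    def __init__(self, feat_dim, hidden_dim, num_labels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),    # input layer (relu)
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),  # hidden layer (tanh)
            nn.Linear(hidden_dim, num_labels),             # output layer (softmax at prediction)
        )

    def forward(self, feats):
        return self.net(feats)

def ensemble_predict(models, feats):
    # average the class distributions of the ten differently-configured classifiers
    probs = torch.stack([torch.softmax(m(feats), dim=-1) for m in models])
    return probs.mean(dim=0).argmax(dim=-1)
```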
Note that the official KBP 2017 event coreference resolution scorer considers a mention pair coreferent if they strictly match on the event type and subtype, which has been discussed recently to be too conservative (Mitamura et al., 2017). But since improving event mention type detection is not our main goal, we therefore relax the constraints and do not consider event mention type match while evaluating event coreference resolution systems. This allows us to directly interpret the influences of document structures in the event coreference resolution task by overlooking any bias from upstream tasks. 5.3 Baseline Systems We compare our document-structure guided event coreference resolution model with three baselines. Local classifier performs greedy merging of event mentions using scores predicted by the local pairwise coreference resolution classifier. An event mention is merged to its best matching antecedent event mention if the predicted score between the two event mentions is highest and greater than 0.5. Clustering performs spectral graph clustering (Pedregosa et al., 2011), which represents commonly used clustering algorithms for event coreference resolution. We used the relation between the size of event mentions and the number of coreference clusters in training data for pre-specifying the number of clusters. Its low performance is partially accounted to the difficulty of determining the number of coreference clusters. Joint learning uses a structured conditional random field model that operates at the document level to jointly model event mention extraction, event coreference resolution and an auxiliary task of event anaphoricity determination. This model has achieved the best event coreference resolution performance to date on the KBP 2016 corpus (Lu and Ng, 2017). 5.4 Our Systems We gradually augment the ILP baseline with additional objective functions and constraints described in sub-sections 4.2.1, 4.2.2, 4.2.3 and 4.2.4. In all the systems below, we combine objective functions with their corresponding coefficients (as described in sub-section 4.3). The Basic ILP System formulates event coreference resolution as an ILP optimization task. It uses scores produced by the local pairwise classifier as weights on variables that represent ILP assignments for event coreference relations. (Equations 1, 2). +Topic structure incorporates the topical structure and the characteristics of main event chains in baseline ILP system (Equations 1-5). +Cross-chain adds constraints and objective function defined for cross-chain inference to the Topical structure system (Equations 1-8). +Distribution further adds distributional patterns to the Cross-chain system (Equations 1-10). +Subevent (Full) optimizes the objective function defined in equation 12 by considering all the constraints defined in 1-11, including constraints for modeling subevent structures. 5.5 Results and Analysis Table 3 shows performance comparisons of our ILP systems with other event coreference resolu493 tion approaches including the recent joint learning approach (Lu and Ng, 2017) which is the best performing model on the KBP 2016 corpus. For both datasets, the full discourse structure augmented model achieved superior performance compared to the local classifier based system. The improvement is observed across all metrics with average F1 gain of 3.1 for KBP 2016 and 2.17 for KBP 2017. Most interestingly, we see over 28% improvement in MUC F1 score which directly evaluates the pairwise coreference link predictions. 
This implies that the document level structures, indeed, helps in linking more coreferent event mentions, which otherwise are difficult with the local classifier trained on lexical and surface features. Our ILP based system also outperforms the previous best model on the KBP 2016 corpus (Lu and Ng, 2017) consistently using all the evaluation metrics, with an overall improvement of 1.21 based on the average F1 scores. In Table 3, we also report the F1 scores when we increasingly add each type of structure in the ILP baseline. Among different scoring metrics, all structures positively contributed to the MUC and BLANC scores for KBP 2016 corpus. However, subevent based constraints slightly reduced the F1 scores on KBP 2017 corpus. Based on our preliminary analysis, this can be accounted to the simple method applied for subevent extraction. We only extracted 31 subevents in KBP 2017 corpus compared to 211 in KBP 2016 corpus. 5.6 Discussions on Generalizability The correlations between event coreference chains and document topic structures are not specific to news articles and widely exist. Several main distributional characteristics of coreferent event mentions, including 1) main event coreference chains often have extended presence and have mentions scattered across segments, and 2) semantically correlated events often have their respective event mentions co-occur in a sentence, directly apply to other sources of texts such as clinical notes. But certain distributional characteristics are genre specific. For instance, while it is common to observe more coreferent event mentions early on in a news article, coreference chains in a clinical note often align well with pre-defined segments like the history of present illness, description of a visit and treatment plan. Thus, the objective functions and constraints defined in equations 1-8 can be directly applied for other domains as well, while other structures like segment-wise distributional patterns may require alteration based on domainspecific knowledge. 6 Conclusions and the Future Work We have presented an ILP based joint inference system for event coreference resolution that utilizes scores predicted by a pairwise event coreference resolution classifier, and models several aspects of correlations between event coreference chains and document level topic structures, including the correlation between the main event chains and topic transition sentences, interdependencies among event coreference chains, genre-specific coreferent mention distributions and subevents. We have shown that these structures are generalizable by conducting experiments on both the KBP 2016 and KBP 2017 datasets. Our model outperformed the previous state-of-the-art model across all coreference scoring metrics. In the future, we will explore the use of additional discourse structures that correlate highly with event coreference chains. Moreover, we will extend this work to other domains such as biomedical domains. Acknowledgments This work was partially supported by the National Science Foundation via NSF Award IIS-1755943. Disclaimer: the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF or the U.S. Government. References David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning About Time and Events. 
Association for Computational Linguistics, Stroudsburg, PA, USA, ARTE ’06, pages 1–8. http://dl.acm.org/citation.cfm?id=1629235.1629236. Jun Araki, Zhengzhong Liu, Eduard H Hovy, and Teruko Mitamura. 2014. Detecting subevent structure for event coreference resolution. In LREC. pages 4553–4558. Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. A latent variable model approach to pmi-based word embeddings. Transactions of the Association for Computational Linguistics 4:385–399. 494 Allison Badgett and Ruihong Huang. 2016. Extracting subevents via an effective two-phase approach. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 906–911. Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The first international conference on language resources and evaluation workshop on linguistics coreference. Granada, volume 1, pages 563–566. Cosmin Adrian Bejan and Sanda Harabagiu. 2010. Unsupervised event coreference resolution with rich linguistic features. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 1412–1422. Cosmin Adrian Bejan and Sanda Harabagiu. 2014. Unsupervised event coreference resolution. Computational Linguistics 40(2):311–347. Zheng Chen and Heng Ji. 2009. Graph-based event coreference resolution. In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing. Association for Computational Linguistics, pages 54–57. Zheng Chen, Heng Ji, and Robert Haralick. 2009. A pairwise event coreference model, feature impact and evaluation for event coreference resolution. In Proceedings of the workshop on events in emerging text types. Association for Computational Linguistics, pages 17–22. Prafulla Kumar Choubey and Ruihong Huang. 2017a. Event coreference resolution by iteratively unfolding inter-dependencies among events. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 2124–2133. Prafulla Kumar Choubey and Ruihong Huang. 2017b. Tamu at kbp 2017: Event nugget detection and coreference resolution. In Proceedings of TAC KBP 2017 Workshop, National Institute of Standards and Technology. Prafulla Kumar Choubey, Kaushik Raju, and Ruihong Huang. 2018. Identifying the most dominant event in a news article by mining event coreference relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers). volume 2, pages 340–345. Kevin Clark and Christopher D Manning. 2016a. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 2256–2262. Kevin Clark and Christopher D Manning. 2016b. Improving coreference resolution by learning entitylevel distributed representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 643–653. Agata Cybulska and Piek Vossen. 2014. Using a sledgehammer to crack a nut? lexical diversity and event coreference resolution. In LREC. pages 4545– 4552. Agata Cybulska and Piek Vossen. 2015a. Translating granularity of event slots into features for event coreference resolution. In Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation. pages 1–10. A.K. 
Cybulska and P.T.J.M. Vossen. 2015b. Bag of events approach to event coreference resolution. supervised classification of event templates. Lecture Notes in Computer Science (9042). 978-3-31918117-2. Pascal Denis, Jason Baldridge, et al. 2007. Joint determination of anaphoricity and coreference resolution using integer programming. In HLT-NAACL. pages 236–243. Joe Ellis, Jeremy Getman, Dana Fore, Neil Kuster, Zhiyi Song, Ann Bies, and Stephanie Strassel. 2015. Overview of linguistic resources for the tac kbp 2015 evaluations: Methodologies and results. In Proceedings of TAC KBP 2015 Workshop, National Institute of Standards and Technology. pages 16–17. Joe Ellis, Jeremy Getman, Neil Kuster, Zhiyi Song, Ann Bies, and Stephanie Strassel. 2016. Overview of linguistic resources for the tac kbp 2016 evaluations: Methodologies and results. In Proceedings of TAC KBP 2016 Workshop, National Institute of Standards and Technology. Jenny Rose Finkel and Christopher D Manning. 2008. Enforcing transitivity in coreference resolution. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers. Association for Computational Linguistics, pages 45–48. Jeremy Getman, Joe Ellis, Zhiyi Song, Jennifer Tracey, and Stephanie Strassel. 2017. Overview of linguistic resources for the tac kbp 2017 evaluations: Methodologies and results. In Proceedings of TAC KBP 2017 Workshop, National Institute of Standards and Technology. Barbara J Grosz, Scott Weinstein, and Aravind K Joshi. 1995. Centering: A framework for modeling the local coherence of discourse. Computational linguistics 21(2):203–225. Aria Haghighi and Dan Klein. 2007. Unsupervised coreference resolution in a nonparametric bayesian model. In Proceedings of the 45th annual meeting of the association of computational linguistics. pages 848–855. 495 Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Computational Linguistics 39(4):885–916. Heeyoung Lee, Marta Recasens, Angel Chang, Mihai Surdeanu, and Dan Jurafsky. 2012. Joint entity and event coreference resolution across documents. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 489– 500. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 188–197. Zhengzhong Liu, Jun Araki, Eduard H Hovy, and Teruko Mitamura. 2014. Supervised withindocument event coreference using information propagation. In LREC. pages 4539–4544. Jing Lu and Vincent Ng. 2017. Joint learning for event coreference resolution. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 90–101. Jing Lu, Deepak Venugopal, Vibhav Gogate, and Vincent Ng. 2016. Joint inference for event coreference resolution. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. pages 3264–3275. Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Proceedings of the conference on human language technology and empirical methods in natural language processing. Association for Computational Linguistics, pages 25–32. 
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations. pages 55–60. http://www.aclweb.org/anthology/P/P14/P14-5010. Sebastian Martschat and Michael Strube. 2015. Latent structures for coreference resolution. Transactions of the Association of Computational Linguistics 3(1):405–418. Teruko Mitamura, Zhengzhong Liu, and Eduard Hovy. 2017. Events detection, coreference and sequencing: Whats next? overview of the tac kbp 2017 event track. In Proceedings of TAC KBP 2017 Workshop, National Institute of Standards and Technology. Stuart Mitchell, Michael OSullivan, and Iain Dunning. 2011. Pulp: a linear programming toolkit for python. The University of Auckland, Auckland, New Zealand, http://www. optimization-online. org/DB FILE/2011/09/3178. pdf . Nafise Sadat Moosavi and Michael Strube. 2017. Lexical features in coreference resolution: To be used with caution. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). volume 2, pages 14–19. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12:2825–2830. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532– 1543. http://www.aclweb.org/anthology/D14-1162. Marta Recasens and Eduard Hovy. 2011. Blanc: Implementing the rand index for coreference evaluation. Natural Language Engineering 17(4):485– 510. Zhiyi Song, Ann Bies, Stephanie Strassel, Tom Riese, Justin Mott, Joe Ellis, Jonathan Wright, Seth Kulick, Neville Ryant, and Xiaoyi Ma. 2015. From light to rich ere: annotation of entities, relations, and events. In Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation. pages 89–98. Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A modeltheoretic coreference scoring scheme. In Proceedings of the 6th conference on Message understanding. Association for Computational Linguistics, pages 45–52. Christopher Walker, Medero Strassel, Maeda Julie, and Kazuaki. 2006. Ace 2005 multilingual training corpus. In Linguistic Data Consortium, LDC Catalog No.: LDC2006T06.. Bishan Yang, Claire Cardie, and Peter Frazier. 2015. A hierarchical distance-dependent bayesian model for event coreference resolution. Transactions of the Association of Computational Linguistics 3(1):517– 528.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 496–505 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 496 DSGAN: Generative Adversarial Training for Distant Supervision Relation Extraction Pengda Qin♯, Weiran Xu♯, William Yang Wang♭ ♯Beijing University of Posts and Telecommunications, China ♭University of California, Santa Barbara, USA {qinpengda, xuweiran}@bupt.edu.cn {william}@cs.ucsb.edu Abstract Distant supervision can effectively label data for relation extraction, but suffers from the noise labeling problem. Recent works mainly perform soft bag-level noise reduction strategies to find the relatively better samples in a sentence bag, which is suboptimal compared with making a hard decision of false positive samples in sentence level. In this paper, we introduce an adversarial learning framework, which we named DSGAN, to learn a sentencelevel true-positive generator. Inspired by Generative Adversarial Networks, we regard the positive samples generated by the generator as the negative samples to train the discriminator. The optimal generator is obtained until the discrimination ability of the discriminator has the greatest decline. We adopt the generator to filter distant supervision training dataset and redistribute the false positive instances into the negative set, in which way to provide a cleaned dataset for relation classification. The experimental results show that the proposed strategy significantly improves the performance of distant supervision relation extraction comparing to state-of-the-art systems. 1 Introduction Relation extraction is a crucial task in the field of natural language processing (NLP). It has a wide range of applications including information retrieval, question answering, and knowledge base completion. The goal of relation extraction system is to predict relation between entity pair in a sentence (Zelenko et al., 2003; Bunescu and Mooney, 2005; GuoDong et al., 2005). For examDS data space DS true positive data DS false positive data DS negative data The decision boundary of DS data The desired decision boundary Figure 1: Illustration of the distant supervision training data distribution for one relation type. ple, given a sentence “The [owl]e1 held the mouse in its [claw]e2.”, a relation classifier should figure out the relation Component-Whole between entity owl and claw. With the infinite amount of facts in real world, it is extremely expensive, and almost impossible for human annotators to annotate training dataset to meet the needs of all walks of life. This problem has received increasingly attention. Fewshot learning and Zero-shot Learning (Xian et al., 2017) try to predict the unseen classes with few labeled data or even without labeled data. Differently, distant supervision (Mintz et al., 2009; Hoffmann et al., 2011; Surdeanu et al., 2012) is to efficiently generate relational data from plain text for unseen relations with distant supervision (DS). However, it naturally brings with some defects: the resulted distantly-supervised training samples are often very noisy (shown in Figure 1), which is the main problem of impeding the performance (Roth et al., 2013). Most of the current state-of-the-art methods (Zeng et al., 2015; Lin et al., 2016) make the denoising operation in the sentence bag of entity pair, and integrate this process into the distant supervision relation ex497 traction. 
Indeed, these methods can filter a substantial number of noise samples; However, they overlook the case that all sentences of an entity pair are false positive, which is also the common phenomenon in distant supervision datasets. Under this consideration, an independent and accurate sentence-level noise reduction strategy is the better choice. In this paper, we design an adversarial learning process (Goodfellow et al., 2014; Radford et al., 2015) to obtain a sentence-level generator that can recognize the true positive samples from the noisy distant supervision dataset without any supervised information. In Figure 1, the existence of false positive samples makes the DS decision boundary suboptimal, therefore hinders the performance of relation extraction. However, in terms of quantity, the true positive samples still occupy most of the proportion; this is the prerequisite of our method. Given the discriminator that possesses the decision boundary of DS dataset (the brown decision boundary in Figure 1), the generator tries to generate true positive samples from DS positive dataset; Then, we assign the generated samples with negative label and the rest samples with positive label to challenge the discriminator. Under this adversarial setting, if the generated sample set includes more true positive samples and more false positive samples are left in the rest set, the classification ability of the discriminator will drop faster. Empirically, we show that our method has brought consistent performance gains in various deep-neural-network-based models, achieving strong performances on the widely used New York Times dataset (Riedel et al., 2010). Our contributions are three-fold: • We are the first to consider adversarial learning to denoise the distant supervision relation extraction dataset. • Our method is sentence-level and modelagnostic, so it can be used as a plug-and-play technique for any relation extractors. • We show that our method can generate a cleaned dataset without any supervised information, in which way to boost the performance of recently proposed neural relation extractors. In Section 2, we outline some related works on distant supervision relation extraction. Next, we describe our adversarial learning strategy in Section 3. In Section 4, we show the stability analyses of DSGAN and the empirical evaluation results. And finally, we conclude in Section 5. 2 Related Work To address the above-mentioned data sparsity issue, Mintz et al. (2009) first align unlabeled text corpus with Freebase by distant supervision. However, distant supervision inevitably suffers from the wrong labeling problem. Instead of explicitly removing noisy instances, the early works intend to suppress the noise. Riedel et al. (2010) adopt multi-instance single-label learning in relation extraction; Hoffmann et al. (2011) and Surdeanu et al. (2012) model distant supervision relation extraction as a multi-instance multi-label problem. Recently, some deep-learning-based models (Zeng et al., 2014; Shen and Huang, 2016) have been proposed to solve relation extraction. Naturally, some works try to alleviate the wrong labeling problem with deep learning technique, and their denoising process is integrated into relation extraction. Zeng et al. (2015) select one most plausible sentence to represent the relation between entity pairs, which inevitably misses some valuable information. Lin et al. 
(2016) calculate a series of soft attention weights for all sentences of one entity pair and the incorrect sentences can be down-weighted; Base on the same idea, Ji et al. (2017) bring the useful entity information into the calculation of the attention weights. However, compared to these soft attention weight assignment strategies, recognizing the true positive samples from distant supervision dataset before relation extraction is a better choice. Takamatsu et al. (2012) build a noise-filtering strategy based on the linguistic features extracted from many NLP tools, including NER and dependency tree, which inevitably suffers the error propagation problem; while we just utilize word embedding as the input information. In this work, we learn a true-positive identifier (the generator) which is independent of the relation prediction of entity pairs, so it can be directly applied on top of any existing relation extraction classifiers. Then, we redistribute the false positive samples into the negative set, in which way to make full use of the distantly labeled resources. 498 3 Adversarial Learning for Distant Supervision In this section, we introduce an adversarial learning pipeline to obtain a robust generator which can automatically discover the true positive samples from the noisy distantly-supervised dataset without any supervised information. The overview of our adversarial learning process is shown in Figure 2. Given a set of distantly-labeled sentences, the generator tries to generate true positive samples from it; But, these generated samples are regarded as negative samples to train the discriminator. Thus, when finishing scanning the DS positive dataset one time, the more true positive samples that the generator discovers, the sharper drop of performance the discriminator obtains. After adversarial training, we hope to obtain a robust generator that is capable of forcing discriminator into maximumly losing its classification ability. In the following section, we describe the adversarial training pipeline between the generator and the discriminator, including the pre-training strategy, objective functions and gradient calculation. Because the generator involves a discrete sampling step, we introduce a policy gradient method to calculate gradients for the generator. 3.1 Pre-Training Strategy Both the generator and the discriminator require the pre-training process, which is the common setting for GANs (Cai and Wang, 2017; Wang et al., 2017). With the better initial parameters, the adversarial learning is prone to convergence. As presented in Figure 2, the discriminator is pre-trained with DS positive dataset P (label 1) and DS negative set ND (label 0). After our adversarial learning process, we desire a strong generator that can, to the maximum extent, collapse the discriminator. Therefore, the more robust generator can be obtained via competing with the more robust discriminator. So we pre-train the discriminator until the accuracy reaches 90% or more. The pretraining of generator is similar to the discriminator; however, for the negative dataset, we use another completely different dataset NG, which makes sure the robustness of the experiment. Specially, we let the generator overfits the DS positive dataset P. The reason of this setting is that we hope the generator wrongly give high probabilities to all of the noisy DS positive samples at the beginning of the training process. 
Then, along with our adversarial learning, the generator learns to gradually decrease the probabilities of the false positive samples. 3.2 Generative Adversarial Training for Distant Supervision Relation Extraction The generator and the discriminator of DSGAN are both modeled by simple CNN, because CNN performs well in understanding sentence (Zeng et al., 2014), and it has less parameters than RNNbased networks. For relation extraction, the input information consists of the sentences and entity pairs; thus, as the common setting (Zeng et al., 2014; Nguyen and Grishman, 2015), we use both word embedding and position embedding to convert input instances into continuous real-valued vectors. What we desire the generator to do is to accurately recognize true positive samples. Unlike the generator applied in computer vision field (Im et al., 2016) that generates new image from the input noise, our generator just needs to discover true positive samples from the noisy DS positive dataset. Thus, it is to realize the “sampling from a probability distribution” process of the discrete GANs (Figure 2). For a input sentence sj, we define the probability of being true positive sample by generator as pG(sj). Similarly, for discriminator, the probability of being true positive sample is represented as pD(sj). We define that one epoch means that one time scanning of the entire DS positive dataset. In order to obtain more feedbacks and make the training process more efficient, we split the DS positive dataset P = {s1, s2, ..., sj, ...} into N bags B = {B1, B2, ...BN}, and the network parameters θG, θD are updated when finishing processing one bag Bi1. Based on the notion of adversarial learning, we define the objectives of the generator and the discriminator as follow, and they are alternatively trained towards their respective objectives. Generator Suppose that the generator produces a set of probability distribution {pG(sj)}j=1...|Bi| for a sentence bag Bi. Based on these probabilities, a set of sentence are sampled and we denote this set as T. T = {sj}, sj ∼pG(sj), j = 1, 2, ..., |Bi| (1) 1The bag here has the different definition from the sentence bag of an entity pair mentioned in the Section 1. 499 Epoch 𝑖 Bag%&' Bag% Bag%(' … … DS Positive Dataset 𝑠' 𝑠* 𝑠+ … 𝑠G … 𝑝' = 0.57 𝑝* = 0.02 𝑝+ = 0.83 𝑝7 = 0.26 𝑝9 = 0.90 Sampling 𝑙𝑎𝑏𝑒𝑙= 1 𝑙𝑎𝑏𝑒𝑙= 0 D 𝑟𝑒𝑤𝑎𝑟𝑑 DS positive dataset DS negative dataset 𝑙𝑎𝑏𝑒𝑙= 1 𝑙𝑎𝑏𝑒𝑙= 0 Pre-training 𝑝- = 0.7 High-confidence samples Load Parameter Low-confidence samples Generator Discriminator Figure 2: An overview of the DSGAN training pipeline. The generator (denoted by G) calculates the probability distribution over a bag of DS positive samples, and then samples according to this probability distribution. The high-confidence samples generated by G are regarded as true positive samples. The discriminator (denoted by D) receives these high-confidence samples but regards them as negative samples; conversely, the low-confidence samples are still treated as positive samples. For the generated samples, G maximizes the probability of being true positive; on the contrary, D minimizes this probability. This generated dataset T consists of the highconfidence sentences, and is regard as true positive samples by the current generator; however, it will be treated as the negative samples to train the discriminator. 
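Before turning to the objectives, the discrete sampling step of equation 1 and the adversarial relabeling of the sampled set can be sketched as follows; the CNN encoder is treated as a black box and every name here is a hypothetical illustration rather than the authors' code.

```python
import torch

def split_bag(generator, bag_sentences):
    """Sample T ~ pG(s) from a bag Bi (equation 1); T is relabeled as negative (0)
    to train D, while the low-confidence rest F = Bi - T keeps the positive label (1)."""
    p_g = generator(bag_sentences)             # pG(sj) for every sentence in the bag
    mask = torch.bernoulli(p_g)                # the discrete sampling step
    T = [s for s, m in zip(bag_sentences, mask) if m > 0.5]
    F = [s for s, m in zip(bag_sentences, mask) if m <= 0.5]
    return T, F, {"T": 0, "F": 1}              # labels used when updating D on this bag
```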
In order to challenge the discriminator, the objective of the generator can be formulated as maximizing the following probabilities of the generated dataset T: LG = X sj∈T log pD(sj) (2) Because LG involves a discrete sampling step, so it cannot be directly optimized by gradientbased algorithm. We adopt a common approach: the policy-gradient-based reinforcement learning. The following section will give the detailed introduction of the setting of reinforcement learning. The parameters of the generator are continually updated until reaching the convergence condition. Discriminator After the generator has generated the sample subset T , the discriminator treats them as the negative samples; conversely, the rest part F = Bi−T is treated as positive samples. So, the objective of the discriminator can be formulated as minimizing the following cross-entropy loss function: (3) LD = −( X sj∈(Bi−T) log pD(sj) + X sj∈T log(1 −pD(sj))) The update of discriminator is identical to the common binary classification problem. Naturally, it can be simply optimized by any gradient-based algorithm. What needs to be explained is that, unlike the common setting of discriminator in previous works, our discriminator loads the same pretrained parameter set at the beginning of each epoch as shown in Figure 2. There are two reasons. First, at the end of our adversarial training, what we need is a robust generator rather than a discriminator. Second, our generator is to sample data rather than generate new data from scratch; Therefore, the discriminator is relatively easy to be collapsed. So we design this new adversarial strategy: the robustest generator is yielded when the discriminator has the largest drop of performance in one epoch. In order to create the equal condition, the bag set B for each epoch is identical, including the sequence and the sentences in each 500 Algorithm 1 The DSGAN algorithm. Data: DS positive set P, DS negative set NG for generator G, DS negative set ND for discriminator D Input: Pre-trained G with parameters θG on dataset (P, NG); Pre-trained D with parameters θD on dataset (P, ND) Output: Adversarially trained generator G 1: Load parameters θG for G 2: Split P into the bag sequence P = {B1, B2, ..., Bi, ..., BN} 3: repeat 4: Load parameters θD for D 5: GG ←0, GD ←0 6: for Bi ∈P, i = 1 to N do 7: Compute the probability pG(sj) for each sentence sj in Bi 8: Obtain the generated part T by sampling according to {pG(sj)}j=1...|B| and the rest set F = Bi −T 9: GD ←−1 |P|{▽θD PT log(1 −pD(sj)) + ▽θD PF log pD(sj)} 10: θD ←θD −αDGD 11: Calculate the reward r 12: GG ← 1 |T| PT r▽θG log pG(sj) 13: θG ←θG + αGGG 14: end for 15: Compute the accuracy ACCD on ND with the current θD 16: until ACCD no longer drops 17: Save θG bag Bi. Optimizing Generator The objective of the generator is similar to the objective of the one-step reinforcement learning problem: Maximizing the expectation of a given function of samples from a parametrized probability distribution. Therefore, we use a policy gradient strategy to update the generator. Corresponding to the terminology of reinforcement learning, sj is the state and PG(sj) is the policy. In order to better reflect the quality of the generator, we define the reward r from two angles: • As the common setting in adversarial learning, for the generated sample set, we hope the confidence of being positive samples by the discriminator becomes higher. 
Therefore, the first component of our reward is formulated as below: r1 = 1 |T| X sj∈T pD(sj) −b1 (4) the function of b1 is to reduce variance during reinforcement learning. • The second component is from the average prediction probability of ND, ˜p = 1 |ND| X sj∈ND pD(sj) (5) ND participates the pre-training process of the discriminator, but not the adversarial training process. When the classification capacity of discriminator declines, the accuracy of being predicted as negative sample on ND gradually drops; thus, ˜p increases. In other words, the generator becomes better. Therefore, for epoch k, after processing the bag Bi, reward r2 is calculated as below, r2 = η(˜pk i −b2) where b2 =max{˜pm i }, m=1..., k−1 (6) b2 has the same function as b1. The gradient of LG can be formulated as below: ▽θDLG = X sj∈Bi Esj∼pG(sj)r▽θG log pG(sj) = 1 |T| X sj∈T r▽θG log pG(sj) (7) 501 3.3 Cleaning Noisy Dataset with Generator After our adversarial learning process, we obtain one generator for one relation type; These generators possess the capability of generating true positive samples for the corresponding relation type. Thus, we can adopt the generator to filter the noise samples from distant supervision dataset. Simply and clearly, we utilize the generator as a binary classifier. In order to reach the maximum utilization of data, we develop a strategy: for an entity pair with a set of annotated sentences, if all of these sentences are determined as false negative by our generator, this entity pair will be redistributed into the negative set. Under this strategy, the scale of distant supervision training set keeps unchanged. 4 Experiments This paper proposes an adversarial learning strategy to detect true positive samples from the noisy distant supervision dataset. Due to the absence of supervised information, we define a generator to heuristically learn to recognize true positive samples through competing with a discriminator. Therefore, our experiments are intended to demonstrate that our DSGAN method possess this capability. To this end, we first briefly introduce the dataset and the evaluation metrics. Empirically, the adversarial learning process, to some extent, has instability; Therefore, we next illustrate the convergence of our adversarial training process. Finally, we demonstrate the efficiency of our generator from two angles: the quality of the generated samples and the performance on the widely-used distant supervision relation extraction task. 4.1 Evaluation and Implementation Details The Reidel dataset2 (Riedel et al., 2010) is a commonly-used distant supervision relation extraction dataset. Freebase is a huge knowledge base including billions of triples: the entity pair and the specific relationship between them. Given these triples, the sentences of each entity pair are selected from the New York Times corpus(NYT). Entity mentions of NYT corpus are recognized by the Stanford named entity recognizer (Finkel et al., 2005). There are 52 actual relationships and a special relation NA which indicates there is no relation between head and tail entities. Entity pairs of 2http://iesl.cs.umass.edu/riedel/ecml/ Hyperparameter Value CNN Window cw, kernel size ck 3, 100 Word embedding de, |V | 50, 114042 Position embedding dp 5 Learning rate of G, D 1e-5, 1e-4 Table 1: Hyperparameter settings of the generator and the discriminator. NA are defined as the entity pairs that appear in the same sentence but are not related according to Freebase. 
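Before the experimental details, the generator's policy-gradient update (equations 4 and 7) can be summarized in a short PyTorch-style sketch; only the r1 reward component is shown, the baseline is passed in as a constant, and all function and variable names are assumptions rather than the released implementation.

```python
import torch

def generator_update(generator, discriminator, bag, sampled_mask, optimizer, baseline):
    """One REINFORCE step on a bag Bi, given the 0/1 float mask of the sampled set T."""
    p_g = generator(bag)                                     # pG(sj)
    with torch.no_grad():
        p_d = discriminator(bag)                             # pD(sj) from the fixed discriminator
        n_t = sampled_mask.sum().clamp(min=1.0)
        reward = (p_d * sampled_mask).sum() / n_t - baseline # r1 in equation 4
    log_p = torch.log(p_g.clamp(min=1e-8))
    loss = -reward * (log_p * sampled_mask).sum() / n_t      # -(1/|T|) * sum_T r * log pG(sj)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The discriminator step of equation 3 is the usual binary cross-entropy over T (label 0) and Bi − T (label 1), so it is omitted from the sketch.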
Due to the absence of the corresponding labeled dataset, there is not a ground-truth test dataset to evaluate the performance of distant supervision relation extraction system. Under this circumstance, the previous work adopt the held-out evaluation to evaluate their systems, which can provide an approximate measure of precision without requiring costly human evaluation. It builds a test set where entity pairs are also extracted from Freebase. Similarly, relation facts that discovered from test articles are automatically compared with those in Freebase. CNN is widely used in relation classification (Santos et al., 2015; Qin et al., 2017), thus the generator and the discriminator are both modeled as a simple CNN with the window size cw and the kernel size ck. Word embedding is directly from the released word embedding matrix by Lin et al. (2016)3. Position embedding has the same setting with the previous works: the maximum distance of -30 and 30. Some detailed hyperparameter settings are displayed in Table 1. 4.2 Training Process of DSGAN Because adversarial learning is widely regarded as an effective but unstable technique, here we illustrate some property changes during the training process, in which way to indicate the learning trend of our proposed approach. We use 3 relation types as the examples: /business/person/company, /people/person/place lived and /location/neighborhood/neighborhood of. Because they are from three major classes (bussiness, people, location) of Reidel dataset and they all have enough distant-supervised instances. The first row in Figure 3 shows the classification ability change of the discriminator during training. The accuracy is calculated from the negative set ND. At the beginning of adversarial learning, the 3https://github.com/thunlp/NRE 502 0.63 0.66 0.69 0.72 0.75 0.78 0.81 0.84 0.87 0.9 0.93 0.96 0.99 6 8 10 12 14 16 18 20 22 24 26 28 F1 Score Epoch Random Pre-training DSGAN 0.7 0.73 0.76 0.79 0.82 0.85 0.88 0.91 0.94 0.97 1 1.03 6 8 10 12 14 16 18 20 22 24 26 28 F1 Score Epoch Random Pre-training DSGAN 0.93 0.94 0.95 0.96 0.97 0.98 0.99 1 1.01 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 Accuracy Bag Sequence 0.73 0.76 0.79 0.82 0.85 0.88 0.91 0.94 0.97 1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 Accuracy Bag Sequence 0.8 0.83 0.86 0.89 0.92 0.95 0.98 1.01 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 Accuracy Bag Sequence 0.7 0.73 0.76 0.79 0.82 0.85 0.88 0.91 0.94 0.97 1 6 8 10 12 14 16 18 20 22 24 26 28 F1 Score Epoch Random Pre-training DSGAN /business/person/company /business/person/company /people/person/place_lived /people/person/place_lived /location/neighborhood/neighborhood_of /location/neighborhood/neighborhood_of Figure 3: The convergence of the DSGAN training process for 3 relation types and the performance of their corresponding generators. The figures in the first row present the performance change on ND in some specific epochs during processing the B = {B1, B2, ...BN}. Each curve stands for one epoch; The color of curves become darker as long as the epoch goes on. Because the discriminator reloads the pre-trained parameters at the beginning of each epoch, all curves start from the same point for each relation type; Along with the adversarial training, the generator gradually collapses the discriminator. The figures in the second row reflect the performance of generators from the view of the difficulty level of training with the positive datasets that are generated by different strategies. 
Based on the noisy DS positive dataset P, DSGAN represents that the cleaned positive dataset is generated by our DSGAN generator; Random means that the positive set is randomly selected from P; Pre-training denotes that the dataset is selected according to the prediction probability of the pre-trained generator. These three new positive datasets are in the same size. discriminator performs well on ND; moreover, ND is not used during adversarial training. Therefore, the accuracy on ND is the criterion to reflect the performance of the discriminator. In the early epochs, the generated samples from the generator increases the accuracy, because it has not possessed the ability of challenging the discriminator; however, as the training epoch increases, this accuracy gradually decreases, which means the discriminator becomes weaker. It is because the generator gradually learn to generate more accurate true positive samples in each bag. After the proposed adversarial learning process, the generator is strong enough to collapse the discriminator. Figure 4 gives more intuitive display of the trend of accuracy. Note that there is a critical point of the decline of accuracy for each presented relation types. It is because that the chance we give the generator to challenge the discriminator is just one time scanning of the noisy dataset; this critical point is yielded when the generator has already been robust enough. Thus, we stop the training process when the model reaches this critical point. To sum up, the capability of our generator can steadily increases, which indicates that DSGAN is a robust adversarial learning strategy. 4.3 Quality of Generator Due to the absence of supervised information, we validate the quality of the generator from another angle. Combining with Figure 1, for one relation type, the true positive samples must have evidently higher relevance (the cluster of purple circles). Therefore, a positive set with more true positive samples is easier to be trained; In other words, the convergence speed is faster and the fitting degree on training set is higher. Based on this , we present the comparison tests in the second row of Figure 3. We build three positive 503 0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 1.05 5 15 25 35 45 55 65 75 85 Accuracy Epoch /business/person/company /people/person/place_lived /location/neighborhood/neighborhood_of Figure 4: The performance change of the discriminator on ND during the training process. Each point in the curves records the prediction accuracy on ND when finishing each epoch. We stop the training process when this accuracy no longer decreases. datasets from the noisy distant supervision dataset P: the randomly-selected positive set, the positive set base on the pre-trained generator and the positive set base on the DSGAN generator. For the pre-trained generator, the positive set is selected according to the probability of being positive from high to low. These three sets have the same size and are accompanied by the same negative set. Obviously, the positive set from the DSGAN generator yields the best performance, which indicates that our adversarial learning process is able to produce a robust true-positive generator. In addition, the pre-trained generator also has a good performance; however, compared with the DSGAN generator, it cannot provide the boundary between the false positives and the true positives. 
4.4 Performance on Distant Supervision Relation Extraction Based on the proposed adversarial learning process, we obtain a generator that can recognize the true positive samples from the noisy distant supervision dataset. Naturally, the improvement of distant supervision relation extraction can provide a intuitive evaluation of our generator. We adopt the strategy mentioned in Section 3.3 to relocate the dataset. After obtaining this redistributed dataset, we apply it to train the recent state-of-the-art models and observe whether it brings further improvement for these systems. Zeng et al. (2015) and Lin et al. (2016) are both the robust models to solve wrong labeling problem of distant supervision relation extraction. According to the comparison displayed in Figure 5 and Figure 6, all four mod0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 Recall 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Precision CNN+ONE CNN+ONE+DSGAN CNN+ATT CNN+ATT+DSGAN Figure 5: Aggregate PR curves of CNN˙based model. 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 Recall 0.4 0.5 0.6 0.7 0.8 0.9 1 Precision PCNN+ONE PCNN+ONE+DSGAN PCNN+ATT PCNN+ATT+DSGAN Figure 6: Aggregate PR curves of PCNN˙based model. els (CNN+ONE, CNN+ATT, PCNN+ONE and PCNN+ATT) achieve further improvement. Even though Zeng et al. (2015) and Lin et al. (2016) are designed to alleviate the influence of false positive samples, both of them merely focus on the noise filtering in the sentence bag of entity pairs. Zeng et al. (2015) combine at-least-one multi-instance learning with deep neural network to extract only one active sentence to represent the target entity pair; Lin et al. (2016) assign soft attention weights to the representations of all sentences of one entity pair, then employ the weighted sum of these representations to predict the relation between the target entity pair. However, from our manual inspection of Riedel dataset (Riedel et al., 2010), we found another false positive case that all the sentences of a specific entity pair are wrong; but the aforementioned methods overlook 504 Model +DSGAN p-value CNN+ONE 0.177 0.189 4.37e-04 CNN+ATT 0.219 0.226 8.36e-03 PCNN+ONE 0.206 0.221 2.89e-06 PCNN+ATT 0.253 0.264 2.34e-03 Table 2: Comparison of AUC values between previous studies and our DSGAN method. The pvalue stands for the result of t-test evaluation. this case, while the proposed method can solve this problem. Our DSGAN pipeline is independent of the relation prediction of entity pairs, so we can adopt our generator as the true-positive indicator to filter the noisy distant supervision dataset before relation extraction, which explains the origin of these further improvements in Figure 5 and Figure 6. In order to give more intuitive comparison, in Table 2, we present the AUC value of each PR curve, which reflects the area size under these curves. The larger value of AUC reflects the better performance. Also, as can be seen from the result of t-test evaluation, all the p-values are less than 5e-02, so the improvements are obvious. 5 Conclusion Distant supervision has become a standard method in relation extraction. However, while it brings the convenience, it also introduces noise in distantly labeled sentences. In this work, we propose the first generative adversarial training method for robust distant supervision relation extraction. More specifically, our framework has two components: a generator that generates true positives, and a discriminator that tries to classify positive and negative data samples. 
With adversarial training, our goal is to gradually decrease the performance of the discriminator, while the generator improves the performance for predicting true positives when reaching equilibrium. Our approach is model-agnostic, and thus can be applied to any distant supervision model. Empirically, we show that our method can significantly improve the performances of many competitive baselines on the widely used New York Time dataset. Acknowledge This work was supported by National Natural Science Foundation of China (61702047), Beijing Natural Science Foundation (4174098), the Fundamental Research Funds for the Central Universities (2017RC02) and National Natural Science Foundation of China (61703234) References Razvan Bunescu and Raymond J Mooney. 2005. Subsequence kernels for relation extraction. In NIPS, pages 171–178. Liwei Cai and William Yang Wang. 2017. Kbgan: Adversarial learning for knowledge graph embeddings. arXiv preprint arXiv:1711.04071. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 363–370. Association for Computational Linguistics. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680. Zhou GuoDong, Su Jian, Zhang Jie, and Zhang Min. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 427–434. Association for Computational Linguistics. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language TechnologiesVolume 1, pages 541–550. Association for Computational Linguistics. Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. 2016. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110. Guoliang Ji, Kang Liu, Shizhu He, Jun Zhao, et al. 2017. Distant supervision for relation extraction with sentence-level attention and entity descriptions. In AAAI, pages 3060–3066. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In ACL (1). Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003–1011. Association for Computational Linguistics. 505 Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In ACL (2), pages 365–371. Pengda Qin, Weiran Xu, and Jun Guo. 2017. Designing an adaptive attention mechanism for relation classification. In Neural Networks (IJCNN), 2017 International Joint Conference on, pages 4356– 4362. IEEE. Alec Radford, Luke Metz, and Soumith Chintala. 2015. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. 
Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Machine Learning and Knowledge Discovery in Databases, pages 148–163. Springer. Benjamin Roth, Tassilo Barth, Michael Wiegand, and Dietrich Klakow. 2013. A survey of noise reduction methods for distant supervision. In Proceedings of the 2013 workshop on Automated knowledge base construction, pages 73–78. ACM. Cicero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. arXiv preprint arXiv:1504.06580. Yatian Shen and Xuanjing Huang. 2016. Attentionbased convolutional neural network for semantic relation extraction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 455– 465. Association for Computational Linguistics. Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa. 2012. Reducing wrong labels in distant supervision for relation extraction. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 721–729. Association for Computational Linguistics. Jun Wang, Lantao Yu, Weinan Zhang, Yu Gong, Yinghui Xu, Benyou Wang, Peng Zhang, and Dell Zhang. 2017. Irgan: A minimax game for unifying generative and discriminative information retrieval models. In Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval, pages 515–524. ACM. Yongqin Xian, Bernt Schiele, and Zeynep Akata. 2017. Zero-shot learning-the good, the bad and the ugly. arXiv preprint arXiv:1703.04394. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Journal of machine learning research, 3(Feb):1083–1106. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), Lisbon, Portugal, pages 17–21. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao, et al. 2014. Relation classification via convolutional deep neural network. In COLING, pages 2335–2344.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 506–514 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 506 Extracting Relational Facts by an End-to-End Neural Model with Copy Mechanism Xiangrong Zeng12, Daojian Zeng3, Shizhu He1, Kang Liu12, Jun Zhao12 1National Laboratory of Pattern Recognition (NLPR), Institute of Automation Chinese Academy of Sciences, Beijing, 100190, China 2University of Chinese Academy of Sciences, Beijing, 100049, China 3Changsha University of Science & Technology, Changsha, 410114, China {xiangrong.zeng, shizhu.he, kliu, jzhao}@nlpr.ia.ac.cn [email protected] Abstract The relational facts in sentences are often complicated. Different relational triplets may have overlaps in a sentence. We divided the sentences into three types according to triplet overlap degree, including Normal, EntityPairOverlap and SingleEntiyOverlap. Existing methods mainly focus on Normal class and fail to extract relational triplets precisely. In this paper, we propose an end-to-end model based on sequence-to-sequence learning with copy mechanism, which can jointly extract relational facts from sentences of any of these classes. We adopt two different strategies in decoding process: employing only one united decoder or applying multiple separated decoders. We test our models in two public datasets and our model outperform the baseline method significantly. 1 Introduction Recently, to build large structural knowledge bases (KB), great efforts have been made on extracting relational facts from natural language texts. A relational fact is often represented as a triplet which consists of two entities (an entity pair) and a semantic relation between them, such as < Chicago, country, UnitedStates >. So far, most previous methods mainly focused on the task of relation extraction or classification which identifies the semantic relations between two pre-assigned entities. Although great progresses have been made (Hendrickx et al., 2010; Zeng et al., 2014; Xu et al., 2015a,b), they all assume that the entities are identified beforehand and neglect the extraction of entities. To extract both of entities and relations, early works(Zelenko et al., 2003; Chan and Roth, 2011) adopted a pipeline Normal S1: Chicago is located in the United States. {<Chicago, country, United States>} EPO S2: News of the list’s existence unnerved officials in Khartoum, Sudan ’s capital. {<Sudan, capital, Khartoum>, <Sudan, contains, Khartoum>} SEO S3: Aarhus airport serves the city of Aarhus who's leader is Jacob Bundsgaard. {<Aarhus, leaderName, Jacob Bundsgaard>, <Aarhus Airport, cityServed, Aarhus>} Chicago United States country Sudan Khartoum contains capital Aarhus Aarhus Airport Jacob Bundsgaard Figure 1: Examples of Normal, EntityPairOverlap (EPO) and SingleEntityOverlap (SEO) classes. The overlapped entities are marked in yellow. S1 belongs to Normal class because none of its triplets have overlapped entities; S2 belongs to EntityPairOverlap class since the entity pair < Sudan, Khartoum > of it’s two triplets are overlapped; And S3 belongs to SingleEntityOverlap class because the entity Aarhus of it’s two triplets are overlapped and these two triplets have no overlapped entity pair. manner, where they first conduct entity recognition and then predict relations between extracted entities. However, the pipeline framework ignores the relevance of entity identification and relation prediction (Li and Ji, 2014). 
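The three overlap classes illustrated in Figure 1 can be checked mechanically from a sentence's gold triplets. The sketch below is one possible reading of those definitions; the function name is ours, and it treats an entity pair as unordered, which the paper does not state explicitly.

```python
from itertools import combinations

def overlap_class(triplets):
    """Classify a sentence by triplet overlap: Normal, EPO and/or SEO.

    Each triplet is (head, relation, tail). Two triplets sharing the same
    (unordered) entity pair make the sentence EPO; sharing a single entity
    but not the whole pair makes it SEO. A sentence with no overlapping
    entities is Normal. A set is returned because, as the paper notes,
    a sentence can belong to both EPO and SEO.
    """
    labels = set()
    for (h1, _, t1), (h2, _, t2) in combinations(triplets, 2):
        pair1, pair2 = {h1, t1}, {h2, t2}
        if pair1 == pair2:
            labels.add("EPO")
        elif pair1 & pair2:
            labels.add("SEO")
    return labels or {"Normal"}

# S2 of Figure 1: both triplets share the pair <Sudan, Khartoum> -> EPO
print(overlap_class([("Sudan", "capital", "Khartoum"),
                     ("Sudan", "contains", "Khartoum")]))
# S3 of Figure 1: the triplets share only the entity "Aarhus" -> SEO
print(overlap_class([("Aarhus", "leaderName", "Jacob Bundsgaard"),
                     ("Aarhus Airport", "cityServed", "Aarhus")]))
```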
Recent works attempted to extract entities and relations jointly. Yu and Lam (2010); Li and Ji (2014); Miwa and Sasaki (2014) designed several elaborate features to construct the bridge between these two subtasks. Similar to other natural language processing (NLP) tasks, they need complicated feature engineering and heavily rely on pre-existing NLP tools for feature extraction. 507 Recently, with the success of deep learning on many NLP tasks, it is also applied on relational facts extraction. Zeng et al. (2014); Xu et al. (2015a,b) employed CNN or RNN on relation classification. Miwa and Bansal (2016); Gupta et al. (2016); Zhang et al. (2017) treated relation extraction task as an end-to-end (end2end) tablefilling problem. Zheng et al. (2017) proposed a novel tagging schema and employed a Recurrent Neural Networks (RNN) based sequence labeling model to jointly extract entities and relations. Nevertheless, the relational facts in sentences are often complicated. Different relational triplets may have overlaps in a sentence. Such phenomenon makes aforementioned methods, whatever deep learning based models and traditional feature engineering based joint models, always fail to extract relational triplets precisely. Generally, according to our observation, we divide the sentences into three types according to triplet overlap degree, including Normal, EntityPairOverlap (EPO) and SingleEntityOverlap (SEO). As shown in Figure 1, a sentence belongs to Normal class if none of its triplets have overlapped entities. A sentence belongs to EntityPairOverlap class if some of its triplets have overlapped entity pair. And a sentence belongs to SingleEntityOverlap class if some of its triplets have an overlapped entity and these triplets don’t have overlapped entity pair. In our knowledge, most previous methods focused on Normal type and seldom consider other types. Even the joint models based on neural network (Zheng et al., 2017), it only assigns a single tag to a word, which means one word can only participate in at most one triplet. As a result, the triplet overlap issue is not actually addressed. To address the aforementioned challenge, we aim to design a model that could extract triplets, including entities and relations, from sentences of Normal, EntityPairOverlap and SingleEntityOverlap classes. To handle the problem of triplet overlap, one entity must be allowed to freely participate in multiple triplets. Different from previous neural methods, we propose an end2end model based on sequence-to-sequence (Seq2Seq) learning with copy mechanism, which can jointly extract relational facts from sentences of any of these classes. Specially, the main component of this model includes two parts: encoder and decoder. The encoder converts a natural language sentence (the source sentence) into a fixed length semantic vector. Then, the decoder reads in this vector and generates triplets directly. To generate a triplet, firstly, the decoder generates the relation. Secondly, by adopting the copy mechanism, the decoder copies the first entity (head entity) from the source sentence. Lastly, the decoder copies the second entity (tail entity) from the source sentence. In this way, multiple triplets can be extracted (In detail, we adopt two different strategies in decoding process: employing only one unified decoder (OneDecoder) to generate all triplets or applying multiple separated decoders (MultiDecoder) and each of them generating one triplet). 
In our model, one entity is allowed to be copied several times when it needs to participate in different triplets. Therefore, our model could handle the triplet overlap issue and deal with both of EntityPairOverlap and SingleEntityOverlap sentence types. Moreover, since extracting entities and relations in a single end2end neural network, our model could extract entities and relations jointly. The main contributions of our work are as follows: • We propose an end2end neural model based on sequence-to-sequence learning with copy mechanism to extract relational facts from sentences, where the entities and relations could be jointly extracted. • Our model could consider the relational triplet overlap problem through copy mechanism. In our knowledge, the relational triplet overlap problem has never been addressed before. • We conduct experiments on two public datasets. Experimental results show that we outperforms the state-of-the-arts with 39.8% and 31.1% improvements respectively. 2 Related Work By giving a sentence with annotated entities, Hendrickx et al. (2010); Zeng et al. (2014); Xu et al. (2015a,b) treat identifying relations in sentences as a multi-class classification problem. Zeng et al. (2014) among the first to introduce CNN into relation classification. Xu et al. (2015a) and Xu et al. (2015b) learned relation representations from shortest dependency paths through a CNN or RNN. Despite their success, these models ignore the extraction of the entities from sentences and could not truly extract relational facts. 508 Born_in Located_in Contains …… News of the list 's existence unnerved officials in Khartoum , Sudan 's capital GO Capital Capital Sudan Khartoum Khartoum Contains Sudan Sudan Khartoum Relation Prediction 0.9 Born_in 𝑟ଵ 𝑟ଶ Located_in 𝑟ଷ Contains 𝑟ସ …… Predicted relation Attention Vector 𝐜௧ Decoder Encoder {<Capital,Sudan,Khartoum>, < Contains,Sudan,Khartoum>} Extracted triplets 0.8 Entity Copy 𝐩௘ Copied entity News of the list 's existence unnerved officials in Khartoum , Sudan 's capital 𝐩௥ 𝐬 𝐨ଵ ୈ 𝐨ଶ ୈ 𝐨ଷ ୈ 𝐨ସ ୈ 𝐨ହ ୈ 𝐨଺ ୈ 𝐡଴ ୈ 𝐡ଵ ୈ 𝐡ଶ ୈ 𝐡ଷ ୈ 𝐡ସ ୈ 𝐡ହ ୈ Figure 2: The overall structure of OneDecoder model. A bi-directional RNN is used to encode the source sentence and then a decoder is used to generate triples directly. The relation is predicted and the entity is copied from source sentence. By giving a sentence without any annotated entities, researchers proposed several methods to extract both entities and relations. Pipeline based methods, like Zelenko et al. (2003) and Chan and Roth (2011), neglected the relevance of entity extraction and relation prediction. To resolve this problem, several joint models have been proposed. Early works (Yu and Lam, 2010; Li and Ji, 2014; Miwa and Sasaki, 2014) need complicated process of feature engineering and heavily depends on NLP tools for feature extraction. Recent models, like Miwa and Bansal (2016); Gupta et al. (2016); Zhang et al. (2017); Zheng et al. (2017), jointly extract the entities and relations based on neural networks. These models are based on tagging framework, which assigns a relational tag to a word or a word pair. Despite their success, none of these models can fully handle the triplet overlap problem mentioned in the first section. The reason is in their hypothesis, that is, a word (or a word pair) can only be assigned with just one relational tag. This work is based on sequence-to-sequence learning with copy mechanism, which have been adopted for some NLP tasks. 
Dong and Lapata (2016) presented a method based on an attentionenhanced and encoder-decoder model, which encodes input utterances and generates their logical forms. Gu et al. (2016); He et al. (2017) applied copy mechanism to sentence generation. They copy a segment from the source sequence to the target sequence. 3 Our Model In this section, we introduce a differentiable neural model based on Seq2Seq learning with copy mechanism, which is able to extract multiple relational facts in an end2end fashion. Our neural model encodes a variable-length sentence into a fixed-length vector representation first and then decodes this vector into the corresponding relational facts (triplets). When decoding, we can either decode all triplets with one unified decoder or decode every triplet with a separated decoder. We denote them as OneDecoder model and MultiDecoder model separately. 509 3.1 OneDecoder Model The overall structure of OneDecoder model is shown in Figure 2. 3.1.1 Encoder To encode a sentence s = [w1, .., wn], where wt represent the t-th word and n is the source sentence length, we first turn it into a matrix X = [x1, · · · , xn], where xt is the embedding of t-th word. The canonical RNN encoder reads this matrix X sequentially and generates output oE t and hidden state hE t in time step t(1 ≤t ≤n) by oE t , hE t = f(xt, hE t−1) (1) where f(· ) represents the encoder function. Following (Gu et al., 2016), our encoder uses a bi-directional RNN (Chung et al., 2014) to encode the input sentence. The forward and backward RNN obtain output sequence { −→ oE 1 , · · · , −→ oE n } and { ←− oE n , · · · , ←− oE 1 }, respectively. We then concatenate −→ oE t and ←−−−− oE n−t+1 to represent the t-th word. We use OE = [oE 1 , ..., oE n ], where oE t = [ −→ oE t ; ←−−−− oE n−t+1], to represent the concatenate result. Similarly, the concatenation of forward and backward RNN hidden states are used as the representation of sentence, that is s = [ −→ hE n ; ←− hE n ] 3.1.2 Decoder The decoder is used to generate triplets directly. Firstly, the decoder generates a relation for the triplet. Secondly, the decoder copies an entity from the source sentence as the first entity of the triplet. Lastly, the decoder copies the second entity from the source sentence. Repeat this process, the decoder could generate multiple triplets. Once all valid triplets are generated, the decoder will generate NA triplets, which means “stopping” and is similar to the “eos” symbol in neural sentence generation. Note that, a NA triplet is composed of an NA-relation and an NA-entity pair. As shown in Figure 3 (a), in time step t (1 ≤ t), we calculate the decoder output oD t and hidden state hD t as follows: oD t , hD t = g(ut, hD t−1) (2) where g(· ) is the decoder function and hD t−1 is the hidden state of time step t −1. We initialize hD 0 with the representation of source sentence s. ut is the decoder input in time step t and we calculate it as: ut = [vt; ct]· Wu (3) where ct is the attention vector and vt is the embedding of copied entity or predicted relation in time step t −1. Wu is a weight matrix. Attention Vector. The attention vector ct is calculated as follows: ct = n X i=1 αi × oE i (4) α = softmax(β) (5) βi = selu([hD t−1; oE i ]· wc) (6) where oE i is the output of encoder in time step i, α = [α1, ..., αn] and β = [β1, ..., βn] are vectors, wc is a weight vector. selu(· ) is activation function (Klambauer et al., 2017). 
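Equations (4)–(6) describe a small additive-style attention whose scores pass through a selu nonlinearity. A minimal NumPy sketch of this computation follows; the variable names and toy dimensions are illustrative assumptions, not taken from the released implementation.

```python
import numpy as np

SCALE, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(x):
    # Self-normalizing activation of Klambauer et al. (2017).
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1))

def attention_vector(h_prev, enc_outputs, w_c):
    """Compute the attention vector c_t of Eqs. (4)-(6).

    h_prev      : previous decoder hidden state,        shape (d,)
    enc_outputs : encoder outputs o^E_1 .. o^E_n,        shape (n, 2d)
    w_c         : scoring weight vector,                 shape (d + 2d,)
    """
    # beta_i = selu([h^D_{t-1}; o^E_i] . w_c)            (Eq. 6)
    beta = np.array([selu(np.concatenate([h_prev, o]) @ w_c)
                     for o in enc_outputs])
    # alpha = softmax(beta)                              (Eq. 5)
    alpha = np.exp(beta - beta.max())
    alpha /= alpha.sum()
    # c_t = sum_i alpha_i * o^E_i                        (Eq. 4)
    return alpha @ enc_outputs

# toy sizes: n = 5 encoder steps, hidden size d = 4 (so o^E_i has size 8)
rng = np.random.default_rng(0)
c_t = attention_vector(rng.normal(size=4),
                       rng.normal(size=(5, 8)),
                       rng.normal(size=12))
print(c_t.shape)  # (8,)
```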
After we get decoder output oD t in time step t (1 ≤t), if t%3 = 1 (that is t = 1, 4, 7, ...), we use oD t to predict a relation, which means we are decoding a new triplet. Otherwise, if t%3 = 2 (that is t = 2, 5, 8, ...), we use oD t to copy the first entity from the source sentence, and if t%3 = 0 (that is t = 3, 6, 9, ...), we copy the second entity. Predict Relation. Suppose there are m valid relations in total. We use a fully connected layer to calculate the confidence vector qr = [qr 1, ..., qr m] of all valid relations: qr = selu(oD t · Wr + br) (7) where Wr is the weight matrix and br is the bias. When predict the relation, it is possible to predict the NA-relation when the model try to generate NA-triplet. To take this into consideration, we calculate the confidence value of NA-relation as: qNA = selu(oD t · WNA + bNA) (8) where WNA is the weight matrix and bNA is the bias. We then concatenate qr and qNA to form the confidence vector of all relations (including the NA-relation) and apply softmax to obtain the probability distribution pr = [pr 1, ..., pr m+1] as: pr = softmax([qr; qNA]) (9) We select the relation with the highest probability as the predict relation and use it’s embedding as the next time step input vt+1. 510 𝐬 𝐜ଵ 𝐨ଵ ஽భ 𝐯ଵ 𝐡ଵ ஽భ 𝐜ଶ 𝐨ଶ ஽భ 𝐯ଶ 𝐡ଶ ஽భ 𝐜ଷ 𝐨ଷ ஽భ 𝐯ଷ 𝐡ଷ ஽భ 𝐬 𝐜ସ 𝐨ସ ஽మ 𝐯ସ 𝐡ସ ஽మ 𝐜ହ 𝐨ହ ஽మ 𝐯ହ 𝐡ହ ஽మ 𝐜଺ 𝐨଺ ஽మ 𝐯଺ 𝐡଺ ஽మ 𝐜ଵ 𝐬 𝐨ଵ ஽ 𝐯ଵ 𝐡ଵ ஽ 𝐜ଶ 𝐨ଶ ஽ 𝐯ଶ 𝐡ଶ ஽ 𝐜ଷ 𝐨ଷ ஽ 𝐯ଷ 𝐡ଷ ஽ 𝐜ସ 𝐨ସ ஽ 𝐯ସ 𝐡ସ ஽ 𝐜ହ 𝐨ହ ஽ 𝐯ହ 𝐡ହ ஽ 𝐜଺ 𝐨଺ ஽ 𝐯଺ 𝐡଺ ஽ (a) (b) Figure 3: The inputs and outputs of the decoder(s) of OneDecoder model and MultiDecoder model. (a) is the decoder of OneDecoder model. As we can see, only one decoder (the green rectangle with shadows) is used and this encoder is initialized with the sentence representation s. (b) is the decoders of MultiDecoder model. There are two decoders (the green rectangle and blue rectangle with shadows). The first decoder is initialized with s; Other decoder(s) are initialized with s and previous decoder’s state. Copy the First Entity. To copy the first entity, we calculate the confidence vector qe = [qe 1, ..., qe n] of all words in source sentence as: qe i = selu([oD t ; oE i ]· we) (10) where we is the weight vector. Similar with the relation prediction, we concatenate qe and qNA to form the confidence vector and apply softmax to obtain the probability distribution pe = [pe 1, ..., pe n+1]: pe = softmax([qe; qNA]) (11) Similarly, We select the word with the highest probability as the predict the word and use it’s embedding as the next time step input vt+1. Copy the Second Entity. Copy the second entity is almost the same as copy the first entity. The only difference is when copying the second entity, we cannot copy the first entity again. This is because in a valid triplet, two entities must be different. Suppose the first copied entity is the k-th word in the source sentence, we introduce a mask vector M with n (n is the length of source sentence) elements, where: Mi = ( 1, i ̸= k 0, i = k (12) then we calculate the probability distribution pe as: pe = softmax([M ⊗qe; qNA]) (13) where ⊗is element-wise multiplication. Just like copy the first entity, We select the word with the highest probability as the predict word and use it’s embedding as the next time step input vt+1. 3.2 MultiDecoder Model MultiDecoder model is an extension of the proposed OneDecoder model. The main difference is when decoding triplets, MultiDecoder model decode triplets with several separated decoders. Figure 3 (b) shows the inputs and outputs of decoders of MultiDecoder model. 
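Before detailing the multi-decoder variant, the copy steps of Section 3.1.2 can be condensed into a few lines. The sketch below assumes the word confidences q^e of Eq. (10) and the NA confidence q^NA have already been computed; note that applying the mask of Eq. (12) as an element-wise product on the scores, exactly as Eq. (13) writes it, still leaves a small probability on the masked word.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def copy_distribution(q_entity, q_na, first_copied=None):
    """Probability over source words (plus NA) at a copy step.

    q_entity     : confidence scores q^e over the n source words (Eq. 10)
    q_na         : scalar NA confidence
    first_copied : index of the already-copied first entity, or None.
                   When given, Eq. (12)'s mask forbids copying it again.
    """
    q_entity = np.asarray(q_entity, dtype=float)
    if first_copied is not None:
        mask = np.ones_like(q_entity)
        mask[first_copied] = 0.0        # Eq. (12)
        q_entity = mask * q_entity      # element-wise product of Eq. (13)
    return softmax(np.append(q_entity, q_na))

# first-entity step: plain softmax over [q^e; q^NA]     (Eq. 11)
p1 = copy_distribution([2.0, 0.5, 1.0], q_na=-1.0)
# second-entity step: word 0 was already copied, so it is masked out
p2 = copy_distribution([2.0, 0.5, 1.0], q_na=-1.0, first_copied=0)
print(p1.argmax(), p2.argmax())  # word 0 wins before masking, word 2 after
```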
There are two decoders (the green and blue rectangle with shadows). Decoders work in a sequential order: the first decoder generate the first triplet and then the second decoder generate the second triplet. Similar with Eq 2, we calculate the hidden state hDi t and output oDi t of i-th (1 ≤i) decoder in time step t as follows: oDi t , hDi t = ( gDi(ut, hDi t−1), t%3 = 2, 0 gDi(ut, ˆh Di t−1), t%3 = 1 (14) gDi(· ) is the decoder function of decoder i. ut is the decoder input in time step t and we calculated it as Eq 3. hDi t−1 is the hidden state of i-th decoder in time step t −1. ˆh Di t−1 is the initial hidden state of i-th decoder, which is calculated as follows: ˆh Di t−1 = ( s, i = 1 1 2(s + hDi−1 t−1 ), i > 1 (15) 511 Class NYT WebNLG Train Test Train Test Normal 37013 3266 1596 246 EPO 9782 978 227 26 SEO 14735 1297 3406 457 ALL 56195 5000 5019 703 Table 1: The number of sentences of Normal, EntityPairOverlap (EPO) and SingleEntityOverlap (SEO) classes. It’s worthy noting that a sentence can belongs to both EPO class and SEO class. 3.3 Training Both OneDecoder and MultiDecoder models are trained with the negative log-likelihood loss function. Given a batch of data with B sentences S = {s1, ..., sB} with the target results Y = {y1, ..., yB}, where yi = [y1 i , ..., yT i ] is the target result of si, the loss function is defined as follows: L = 1 B × T B X i=1 T X t=1 −log(p(yt i|y<t i , si, θ)) (16) T is the maximum time step of decoder. p(x|y) is the conditional probability of x given y. θ denotes parameters of the entire model. 4 Experiments 4.1 Dataset To evaluate the performance of our methods, we conduct experiments on two widely used datasets. The first is New York Times (NYT) dataset, which is produced by distant supervision method (Riedel et al., 2010). This dataset consists of 1.18M sentences sampled from 294k 1987-2007 New York Times news articles. There are 24 valid relations in total. In this paper, we treat this dataset as supervised data as the same as Zheng et al. (2017). We filter the sentences with more than 100 words and the sentences containing no positive triplets, and 66195 sentences are left. We randomly select 5000 sentences from it as the test set, 5000 sentences as the validation set and the rest 56195 sentences are used as train set. The second is WebNLG dataset (Gardent et al., 2017). It is originally created for Natural Language Generation (NLG) task. This dataset contains 246 valid relations. In this dataset, a instance including a group of triplets and several standard sentences (written by human). Every standard sentence contains all triplets of this instance. We only use the first standard sentence in our experiments and we filter out the instances if all entities of triplets are not found in this standard sentence. The origin WebNLG dataset contains train set and development set. In our experiments, we treat the origin development set as test set and randomly split the origin train set into validation set and train set. After filtering and splitting, the train set contains 5019 instances, the test set contains 703 instances and the validation set contains 500 instances. The number of sentences of every class in NYT and WebNLG dataset are shown in Table 1. It’s worthy noting that a sentence can belongs to both EntityPairOverlap class and SingleEntityOverlap class. 
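Equation (15) is the only structural difference between the decoders: each decoder after the first is seeded with the mean of the sentence representation and its predecessor's hidden state. A minimal sketch (the function name is ours):

```python
import numpy as np

def initial_hidden(s, prev_decoder_state=None):
    """Initial hidden state of the i-th decoder as in Eq. (15).

    The first decoder starts from the sentence representation s; every
    later decoder starts from the mean of s and the previous decoder's
    hidden state at the same time step.
    """
    if prev_decoder_state is None:          # i = 1
        return s
    return 0.5 * (s + prev_decoder_state)   # i > 1

s = np.array([0.2, -0.4, 0.6])
h1 = np.array([1.0, 0.0, -1.0])
print(initial_hidden(s))        # decoder 1: s itself
print(initial_hidden(s, h1))    # decoder 2: 0.5 * (s + h1)
```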
4.2 Settings In our experiments, for both dataset, we use LSTM (Hochreiter and Schmidhuber, 1997) as the model cell; The cell unit number is set to 1000; The embedding dimension is set to 100; The batch size is 100 and the learning rate is 0.001; The maximum time steps T is 15, which means we predict at most 5 triplets for each sentence (therefore, there are 5 decoders in MultiDecoder model). These hyperparameters are tuned on the validation set. We use Adam (Kingma and Ba, 2015) to optimize parameters and we stop the training when we find the best result in the validation set. 4.3 Baseline and Evaluation Metrics We compare our models with NovelTagging model (Zheng et al., 2017), which conduct the best performance on relational facts extraction. We directly run the code released by Zheng et al. (2017) to acquire the results. Following Zheng et al. (2017), we use the standard micro Precision, Recall and F1 score to evaluate the results. Triplets are regarded as correct when it’s relation and entities are both correct. When copying the entity, we only copy the last word of it. A triplet is regarded as NA-triplet when and only when it’s relation is NA-relation and it has an NA-entity pair. The predicted NA-triplets will be excluded. 4.4 Results Table 2 shows the Precision, Recall and F1 value of NovelTagging model (Zheng et al., 2017) and our OneDecoder and MultiDecoder models. 512 Model NYT WebNLG Precision Recall F1 Precision Recall F1 NovelTagging 0.624 0.317 0.420 0.525 0.193 0.283 OneDecoder 0.594 0.531 0.560 0.322 0.289 0.305 MultiDecoder 0.610 0.566 0.587 0.377 0.364 0.371 Table 2: Results of different models in NYT dataset and WebNLG dataset. Precision Recall F1 0 1 0.777 0.696 0.734 0.641 0.686 0.663 0.632 0.690 0.660 Normal Class NovelTagging OneDecoder MultiDecoder Precision Recall F1 0 1 0.374 0.085 0.138 0.598 0.485 0.536 0.607 0.503 0.550 EntityPairOverlap Class NovelTagging OneDecoder MultiDecoder Precision Recall F1 0 1 0.432 0.111 0.176 0.480 0.353 0.406 0.548 0.436 0.486 SingleEntityOverlap Class NovelTagging OneDecoder MultiDecoder Figure 4: Results of NovelTagging, OneDecoder, and MultiDecoder model in Normal, EntityPairOverlap and SingleEntityOverlap classes in NYT dataset. As we can see, in NYT dataset, our MultiDecoder model achieves the best F1 score, which is 0.587. There is 39.8% improvement compared with the NovelTagging model, which is 0.420. Besides, our OneDecoder model also outperforms the NovelTagging model. In the WebNLG dataset, MultiDecoder model achieves the highest F1 score (0.371). MultiDecoder and OneDecoder models outperform the NovelTagging model with 31.1% and 7.8% improvements, respectively. These observations verify the effectiveness of our models. We can also observe that, in both NYT and WebNLG dataset, the NovelTagging model achieves the highest precision value and lowest recall value. By contrast, our models are much more balanced. We think that the reason is in the structure of the proposed models. The NovelTagging method finds triplets through tagging the words. However, they assume that only one tag could be assigned to just one word. As a result, one word can participate at most one triplet. Therefore, the NovelTagging model can only recall a small number of triplets, which harms the recall performance. Different from the NovelTagging model, our models apply copy mechanism to find entities for a triplet, and a word can be copied many times when this word needs to participate in multiple different triplets. 
Not surprisingly, our models recall more triplets and achieve higher recall value. Further experiments verified this. 4.5 Detailed Results on Different Sentence Types To verify the ability of our models in handling the overlapping problem, we conduct further experiments on NYT dataset. Figure 4 shows the results of NovelTagging, OneDecoder and MultiDecoder model in Normal, EntityPairOverlap and SingleEntityOverlap classes. As we can see, our proposed models perform much better than NovelTagging model in EntityPairOverlap class and SingleEntityOverlap classes. Specifically, our models achieve much higher performance on all metrics. Another observation is that NovelTagging model achieves the best performance in Normal class. This is because the NovelTagging model is designed more suitable for Normal class. However, our proposed models are more suitable for the triplet overlap issues. Furthermore, it is still difficult for our models to judge how many triplets are needed for the input sentence. As a result, there is a loss in our models for Normal class. Nevertheless, the overall perfor513 1 2 3 4 >=5 Triplets number of a sentence 0.0 0.8 0.777 0.471 0.296 0.464 0.295 0.645 0.572 0.581 0.546 0.316 0.636 0.624 0.586 0.584 0.409 Precision NovelTagging OneDecoder MultiDecoder 1 2 3 4 >=5 Triplets number of a sentence 0.0 0.8 0.697 0.191 0.074 0.083 0.038 0.698 0.488 0.434 0.440 0.149 0.699 0.551 0.468 0.495 0.237 Recall NovelTagging OneDecoder MultiDecoder 1 2 3 4 >=5 Triplets number of a sentence 0.0 0.8 0.735 0.272 0.118 0.141 0.068 0.671 0.526 0.497 0.487 0.203 0.666 0.586 0.520 0.536 0.300 F1 NovelTagging OneDecoder MultiDecoder Figure 5: Relation Extraction from sentences that contains different number of triplets. We divide the sentences of NYT test set into 5 subclasses. Each class contains sentences that have 1,2,3,4 or >= 5 triplets. Model NYT WebNLG OneDecoder 0.858 0.745 MultiDecoder 0.862 0.821 Table 3: F1 values of entity generation. mance of the proposed models still outperforms NoverTagging. Moreover, we notice that the whole extracted performance of EntityPairOverlap and SingleEntityOverlap class is lower than that in Normal class. It proves that extracting relational facts from EntityPairOverlap and SingleEntityOverlap classes are much more challenging than from Normal class. We also compare the model’s ability of extracting relations from sentences that contains different number of triplets. We divide the sentences in NYT test set into 5 subclasses. Each class contains sentences that has 1,2,3,4 or >= 5 triplets. The results are shown in Figure 5. When extracting relation from sentences that contains 1 triplets, NovelTagging model achieve the best performance. However, when the number of triplets increases, the performance of NovelTagging model decreases significantly. We can also observe the huge decrease of recall value of NovelTagging model. These experimental results demonstrate the ability of our model in handling multiple relation extraction. 4.6 OneDecoder vs. MultiDecoder As shown in the previous experiments (Table 2, Figure 4 and Figure 5), our MultiDecoder model performs better then OneDecoder model and NovModel NYT WebNLG OneDecoder 0.874 0.759 MultiDecoder 0.870 0.751 Table 4: F1 values of relation generation. elTagging model. To find out why MultiDecoder model performs better than OneDecoder model, we analyzed their ability of entity generation and relation generation. The experiment results are shown in Table 3 and Table 4. 
We can observe that on both NYT and WebNLG datasets, these two models have comparable abilities on relation generation. However, MultiDecoder performs better than OneDecoder model when generating entities. We think that it is because MultiDecoder model utilizes different decoder to generate different triplets so that the entity generation results could be more diverse. Conclusions and Future Work In this paper, we proposed an end2end neural model based on Seq2Seq learning framework with copy mechanism for relational facts extraction. Our model can jointly extract relation and entity from sentences, especially when triplets in the sentences are overlapped. Moreover, we analyze the different overlap types and adopt two strategies for this issue, including one unified decoder and multiple separated decoders. We conduct experiments on two public datasets to evaluate the effectiveness of our models. The experiment results show that our models outperform the baseline method signif514 icantly and our models can extract relational facts from all three classes. This challenging task is far from being solved. Our future work will concentrate on how to improve the performance further. Another future work is test our model in other NLP tasks like event extraction. Acknowledgments The authors thank Yi Chang from Huawei Tech. Ltm for his helpful discussions. This work is supported by the Natural Science Foundation of China (No.61533018 and No.61702512). This work is also supported in part by Beijing Unisound Information Technology Co., Ltd, and Huawei Innovation Research Program of Huawei Tech. Ltm. References Yee Seng Chan and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extraction. In Proceedings of ACL, pages 551–560. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of ACL, pages 33–43. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating training corpora for nlg micro-planners. In Proceedings of ACL, pages 179–188. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of ACL, pages 1631–1640. Pankaj Gupta, Hinrich Schtze, and Bernt Andrassy. 2016. Table filling multi-task recurrent neural network for joint entity and relation extraction. In Proceedings of COLING, pages 2537–2547. Shizhu He, Cao Liu, Kang Liu, and Jun Zhao. 2017. Generating natural answers by incorporating copying and retrieving mechanisms in sequence-tosequence learning. In Proceedings of ACL, pages 199–208. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 33–38. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: a Method for Stochastic Optimization. In Proceedings of ICLR, pages 1–15. G¨unter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. 2017. Self-normalizing neural networks. In Advances in NIPS, pages 971– 980. Qi Li and Heng Ji. 2014. 
Incremental joint extraction of entity mentions and relations. In Proceedings of ACL, pages 402–412. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of ACL, pages 1105– 1116. Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. In Proceedings of EMNLP, pages 1858– 1869. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Proceedings of ECML PKDD, pages 148–163. Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015a. Semantic relation classification via convolutional neural networks with simple negative sampling. In Proceedings of EMNLP, pages 536–540. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015b. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of EMNLP, pages 1785–1794. Xiaofeng Yu and Wai Lam. 2010. Jointly identifying entities and extracting relations in encyclopedia text via a graphical model approach. In Proceedings of COLING, pages 1399–1407. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. J. Mach. Learn. Res., 3:1083–1106. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING, pages 2335–2344. Meishan Zhang, Yue Zhang, and Guohong Fu. 2017. End-to-end neural relation extraction with global optimization. In Proceedings of EMNLP, pages 1730– 1740. Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging scheme. In Proceedings of ACL, pages 1227–1236.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 515–526 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 515 Self-regulation: Employing a Generative Adversarial Network to Improve Event Detection Yu Hong Wenxuan Zhou Jingli Zhang Qiaoming Zhu Guodong Zhou∗ Institute of Artificial Intelligence, Soochow University School of Computer Science and Technology, Soochow University No.1, Shizi ST, Suzhou, China, 215006 {tianxianer, wxchow024, jlzhang05}@gmail.com {qmzhu, gdzhou}@suda.edu.cn Abstract Due to the ability of encoding and mapping semantic information into a highdimensional latent feature space, neural networks have been successfully used for detecting events to a certain extent. However, such a feature space can be easily contaminated by spurious features inherent in event detection. In this paper, we propose a self-regulated learning approach by utilizing a generative adversarial network to generate spurious features. On the basis, we employ a recurrent network to eliminate the fakes. Detailed experiments on the ACE 2005 and TAC-KBP 2015 corpora show that our proposed method is highly effective and adaptable. 1 Introduction Event detection aims to locate the event triggers of specified types in text. Normally, triggers are words or nuggets that evoke the events of interest. Detecting events in an automatic way is challenging, not only because an event can be expressed in different words, but also because a word may express a variety of events in different contexts. In particular, the frequent utilization of common words, ambiguous words and pronouns in event mentions makes them harder to detect: 1) Generality – taken home <Transport> Ambiguity 1 – campaign in Iraq <Attack> Ambiguity 2 – political campaign <Elect> Coreference – Either its bad or good <Marry> A promising solution to this challenge is through semantic understanding. Recently, neural networks have been widely used in this direction (Nguyen and Grishman, 2016; Ghaeini et al., ∗Corresponding author 2016; Feng et al., 2016; Liu et al., 2017b; Chen et al., 2017), which allows semantics of event mentions (trigger plus context) to be encoded in a high-dimensional latent feature space. This facilitates the learning of deep-level semantics. Besides, the use of neural networks not only strengthens current supervised classification of events but alleviates the complexity of feature engineering. However, compared to the earlier study (Liao and Grishman, 2010; Hong et al., 2011; Li et al., 2013), in which the features are carefully designed by experts, the neural network based methods suffer more from spurious features. Here, spurious feature is specified as the latent information which looks like the semantically related information to an event, but actually not (Liu et al., 2017a). For example, in the following sample, the semantic information of the word “prison” most probably enables spurious features to come into being, because the word often co-occurs with the trigger ”taken” to evoke an Arrest-jail event instead of the ground-truth event Transport: 2) Prison authorities have given the nod for Anwar to be taken home later in the afternoon. Trigger: taken. Event Type: Transport It is certain that spurious features often result from the semantically pseudo-related context, and during training, a neural network may mistakenly and unconsciously preserve the memory to produce the fakes. 
However, it is difficult to determine which words are pseudo-related in a specific case, and when they will “jump out” to mislead the generation of latent features during testing. To address the challenge, we suggest to regulate the learning process with a two-channel selfregulated learning strategy. In the self-regulation process, on one hand, a generative adversarial network is trained to produce the most spurious features, while on the other hand, a neural network 516 澳 ݔ ෙ  ෙ  濵濴濶濾澳瀃瀅瀂瀃濴濺濴瀇濼瀂瀁澳 濵濴濶濾澳瀃瀅瀂瀃濴濺濴瀇濼瀂瀁澳 濣瀅濸濷濼濶瀇濼瀂瀁澳 Figure 1: Self-regulated learning scheme is equipped with a memory suppressor to eliminate the fakes. Detailed experiments on event detection show that our proposed method achieves a substantial performance gain, and is capable of robust domain adaptation. 2 Task Definition The task of event detection is to determine whether there is one or more event triggers in a sentence. Trigger is defined as a token or nugget that best signals the occurrence of an event. If successfully identified, a trigger is required to be assigned a tag to indicate the event type: Input: Either its bad or good Output: its <trigger>; Marry <type> We formalize the event detection problem as a multi-class classification problem. Given a sentence, we classify every token of the sentence into one of the predefined event classes (Doddington et al., 2004) or non-trigger class. 3 Self-Regulated Learning (SELF) SELF is a double-channel model (Figure 1), consisted of a cooperative network (Islam et al., 2003) and a generative adversarial net (GAN) (Goodfellow et al., 2014). A memory suppressor S is used to regulate communication between the channels. 3.1 Cooperative Network In channel 1, the generator G is specified as a multilayer perceptron. It plays a role of a “diligent student”. By a differentiable function G(x, θg) with parameters θg, the generator learns to produce a vector of latent features og that may best characterize the token x, i.e., og = G(x, θg). The discriminator D (called “a lucky professor”) is a single-layer perceptron, implemented as a differentiable function D(og, θd) with parameters θd. Relying on the feature vector og, it attempts to accurately predict the probability of the token x triggering an event for all event classes, i.e., ˆy = D(og, θd), and assigns x to the most probable class c (iff ˆyc > ∀ˆy¯c, ¯c ̸= c). Therefore, G and D cooperate with each other during training, developing the parameters θg and θd with the same goal – to minimize the performance loss L(ˆy, y) in the detection task: θg θd  = argmin L(ˆy, y) (1) where, y denotes the ground-truth probability distribution over event classes, and L indicates the deviation of the prediction from the ground truth. 3.2 Generative Adversarial Network In channel 2, the generator ˇG and discriminator ˇD have the same perceptual structures as G and D. They also perform learning by differentiable functions, respectively ˇG(x, θˇg) and ˇD(oˇg, θ ˇd). A major difference, however, is that they are caught into a cycle of highly adversarial competition. The generator ˇG is a “trouble maker”. It learns to produce spurious features, and utilizes them to contaminate the feature vector oˇg of the token x. Thus ˇG changes a real sample x into a fake z – sometimes successfully, sometimes less so. Using the fakes, ˇG repeatedly instigates the discriminator ˇD to make mistakes. On the other side, ˇD (“a hapless professor”) has to avoid being deceived, and struggles to correctly detect events no matter whether it encounters x or z. 
In order to outsmart the adversary, ˇG develops the parameters θˇg during training to maximize the performance loss, but on the contrary, ˇD develops the parameters θ ˇd to minimize the loss: θˇg = argmax L(ˆy, y) (2) θ ˇd = argmin L(ˆy, y) (3) Numerous studies have confirmed that the twoplayer minmax game enables both ˇG and ˇD to improve their methods (Goodfellow et al., 2014; Liu and Tuzel, 2016; Huang et al., 2017). 3.3 Regulation with Memory Suppressor Using a memory suppressor, we try to optimize the diligent student G. The goal is to enable G to be as dissimilar as possible to the troublemaker ˇG. The suppressor uses the output oˇg of ˇG as a reference resource which should be full of spurious features. On the basis, it looks over the output og of G, so as to verify whether the features in og are different to those in oˇg. If very different, the suppressor allows G to preserve the memory (viz., θg in G(x, θg)), otherwise update. In other word, 517 for G, the suppressor forcibly erases the memory which may result in the generation of spurious features. We call this the self-regulation. Self-regulation is performed for the whole sentence which is fed into G and ˇG. Assume that Og is a matrix, constituted with a series of feature vectors, i.e., the vectors generated by G for all the tokens in an input sentence (og ∈Og), while Oˇg is another feature matrix, generated by ˇG for the tokens (oˇg ∈Oˇg). Thus, we utilize the matrix approximation between Og and Oˇg for measuring the loss of self-regulation learning Ldiff. The higher the similarity, the greater the loss. During training, the generator G is required to develop the parameters θg to minimize the loss: θg = argmin Ldiff(og, oˇg) (4) We present in detail the matrix approximate calculation in section 4.4, where the squared Frobenius norm (Bousmalis et al., 2016) is used. 3.4 Learning to Predict We incorporate the cooperative network with the GAN, and enhance their learning by joint training. In the 4-member incorporation, i.e., {G, ˇG, D and ˇD}, the primary beneficiary is the lucky professor D. It can benefit from both the cooperation in channel 1 and the competition in channel 2. The latent features it uses are well-produced by G, and decontaminated by eliminating possible fakes like those made by ˇG. Therefore, in experiments, we choose to output the prediction results of D. In this paper, we use two recurrent neural networks (RNN) (Sutskever et al., 2014; Chung et al., 2014) of the same structure as the generators. And both the discriminators are implemented as a fullyconnected layer followed by a softmax layer. 4 Recurrent Models for SELF RNN with long short-term memory (abbr., LSTM) is adopted due to the superior performance in a variety of NLP tasks (Liu et al., 2016a; Lin et al., 2017; Liu et al., 2017a). Furthermore, the bidirectional LSTM (Bi-LSTM) architecture (Schuster and Paliwal, 1997; Ghaeini et al., 2016; Feng et al., 2016) is strictly followed. This architecture enables modeling of the semantics of a token with both the preceding and following contexts. 4.1 LSTM based Generator Given a sentence, we follow Chen et al (2015) to take all the tokens of the whole sentence as the input. Before feeding the tokens into the network, we transform each of them into a real-valued vector x ∈Re. The vector is formed by concatenating a word embedding with an entity type embedding. 
• Word Embedding: It is a fixed-dimensional real-valued vector which represents the hidden semantic properties of a token (Collobert and Weston, 2008; Turian et al., 2010). • Entity Type Embedding: It is specially used to characterize the entity type associated with a token. The BIO2 tagging scheme (Wang and Manning, 2013; Huang et al., 2015) is employed for assigning a type label to each token in the sentence. For the input token xt at the current time step t, the LSTM generates the latent feature vector ot ∈ Rd by the previous memory. Meanwhile, the token is used to update the current memory. The LSTM possesses a long-term memory unit ct ∈Rd and short-term ct ∈Rd. In addition, it is equipped with the input gate it, forgetting gate ft and a hidden state ht, which are assembled together to promote the use of memory, as well as dynamic memory updating. Similarly, they are defined as a d-dimensional vector in Rd. Thus LSTM works in the following way: ⎡ ⎢⎢⎣ ot ct it ft ⎤ ⎥⎥⎦= ⎡ ⎢⎢⎣ σ tanh σ σ ⎤ ⎥⎥⎦ W  xt ht−1  + b (5) ht = ot ⊙tanh(ct) (6) ct = ct ⊙it + ct−1 ⊙ft (7) where W ∈R4d×(d+e) and b ∈R4d are parameters of affine transformation; σ refers to the logistic sigmoid function and ⊙denotes element-wise multiplication. The output functions of both the generators in SELF, i.e., G and ˇG, can be boiled down to the output gate ot ∈Rd of the LSTM cell: ot = LSTM(xt; θ) (8) where, the function LSTM (·;·) is a shorthand for Eq. (5-7) and θ represents all the parameters of LSTM. For both G and ˇG, θ are initialized with the same values in experiments. But due to the distinct training goals of G and ˇG (diligence or makingtrouble), the values of the parameters in the two 518 cases will change to be very different after training. Therefore, we have og,t = LSTM(xt, θg,t) and oˇg,t = LSTM(xt, θˇg,t). 4.2 Fully-connected Layer for Discrimination Depending on the feature vectors og,t and oˇg,t, the two discriminators D and ˇD predict the probability of the token xt triggering an event for all event classes. As usual, they compute the probability distribution over classes using a fully connected layer followed by a softmax layer: ˆy = softmax( ˆW · ot + ˆb) (9) where ˇy is a C-dimensional vector, in which each dimension indicates the prediction for a class; C is the class number; ˆW ∈Rd is the weight which needs to be learned; ˆb is a bias term. It is noteworthy that the discriminator D and ˇD don’t share the weight and the bias. It means that, for the same token xt, they may make markedly different predictions: ˆyg,t = softmax( ˆWg ·og,t + ˆbg) and ˆyˇg,t = softmax( ˆWˇg · oˇg,t + ˆbˇg). 4.3 Classification Loss We specify the loss as the cross-entropy between the predicted and ground-truth probability distributions over classes. Given a batch of training data that includes N samples (xi, yi), we calculate the losses the discriminators cause as below: L(ˆyg, y) = − N i=1 C j=1 yj i log(ˆyj g,i) (10) L(ˆyˇg, y) = − N i=1 C j=1 yj i log(ˆyj ˇg,i) (11) where yi is a C-dimensional one-hot vector. The value of its j-th dimension is set to be 1 only if the token xi triggers an event of the j-th class, otherwise 0. Both ˆyg,i and ˆyˇg,i are the predicted probability distributions over the C classes for xi. 
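A compact NumPy sketch of Eqs. (9)–(11): one fully-connected layer plus softmax per channel, scored with cross-entropy against one-hot targets. The shapes and the 34-class setup are illustrative assumptions, not values from the released code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict(o, W, b):
    # Eq. (9): a fully-connected layer followed by softmax, applied
    # separately in each channel with its own weights and bias.
    return softmax(o @ W + b)

def cross_entropy(y_hat, y):
    # Eqs. (10)-(11): negative log-likelihood against one-hot targets,
    # summed over the tokens in the batch.
    return -np.sum(y * np.log(y_hat + 1e-12))

# toy setup: d = 6 latent features, C = 34 classes (33 event types + None)
rng = np.random.default_rng(1)
d, C, N = 6, 34, 3
o_g = rng.normal(size=(N, d))                 # features from generator G
y = np.eye(C)[rng.integers(0, C, size=N)]     # one-hot gold labels
W_g, b_g = rng.normal(size=(d, C)), np.zeros(C)
print(cross_entropy(predict(o_g, W_g, b_g), y))
```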
4.4 Loss of Self-regulated Learning Assume that Og is a matrix, consisted of the feature vectors output by G for all the tokens in a sentence, i.e., og,t ∈Og, and Oˇg is that provided by ˇG, i.e., oˇg,t ∈Oˇg, thus we compute the similarity between Og and Oˇg and use it as the measure of self-regulation loss Ldiff(Og, Oˇg): Ldiff(Og, Oˇg) = ∥OgO⊤ ˇg ∥ 2 F (12) where, ∥· ∥ 2 F denotes the squared Frobenius norm (Bousmalis et al., 2016), which is used to calculate the similarity between matrices. It is noteworthy that the feature vectors a generator outputs are required to serve as the rows in the matrix, deployed in a top-down manner and arranged in the order in which they are generated. For example, the feature vector og,t the generator G outputs at the time t needs to be placed in the t-th row of the matrix Og. At the very beginning of the measurement, the similarity between every feature vector in Og and that in O ˇG is first calculated by the matrix-matrix multiplication OgO⊤ ˇg : ⎛ ⎜ ⎜ ⎜ ⎜ ⎝ og,1o⊤ ˇg,1 ... og,1o⊤ ˇg,t ... og,1o⊤ ˇg,l ... ... ... ... ... og,1o⊤ ˇg,t ... og,to⊤ ˇg,t ... og,to⊤ ˇg,l ... ... ... ... ... og,1o⊤ ˇg,l ... og,lo⊤ ˇg,t ... og,lo⊤ ˇg,l ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ where, the symbol ⊤denotes the transpose operation; l is the sentence length which is defined to be uniform for all sentences (l=80), and if it is larger than the real ones, padding is used; og,ioˇg,j denotes the scalar product between the feature vectors og,i and oˇg,j. Let Am×n be a matrix, the squared Frobenius norm of Am×n (i.e., ∥Am×n∥ 2 F ) is defined as: ∥Am×n∥ 2 F = ⎛ ⎝ m i=1 n j=1 |aij|2 ⎞ ⎠ 1 2 (13) where, aij denotes the j-th element in the i-th row of Am×n. Thus, if we let Am×n be the matrix produced by the matrix-matrix multiplication OgO⊤ ˇg , the self-regulation loss Ldiff(Og, Oˇg) can be eventually obtained by: Ldiff(Og, Oˇg) = ⎛ ⎝ l i=1 l j=1 |og,ioˇg,j|2 ⎞ ⎠ 1 2 (14) For a batch of training data that includes N′ sentences, the global self-regulation loss is specified as the sum of the losses for all the sentences: LSELF = N′ i=1 Ldiff(Og, Oˇg). 4.5 Training We train the cooperative network in SELF to minimize the classification loss L(ˆyg, y) and the loss 519 of self-regulated learning LSELF : θg = argmin (Lˆyg, y) (15) θd = argmin (L(ˆyg, y) + λ · LSELF ) (16) where λ is a hyper-parameter, which is used to harmonize the two losses. The min-max game is utilized for training the adversarial net in SELF: θˇg = argmax L(ˆyˇg, y); θ ˇd = argmin L(ˆyˇg, y). All the networks in SELF are trained jointly using the same batches of samples. They are trained via stochastic gradient descent (Nguyen and Grishman, 2015) with shuffled mini-batches and the AdaDelta update rule (Zeiler, 2012). The gradients are computed using back propagation. And regularization is implemented by a dropout (Hinton et al., 2012). 5 Experimentation 5.1 Resource and Experimental Datasets We test the presented model on the ACE 2005 corpus. The corpus is annotated with single-token event triggers and has 33 predefined event types (Doddington et al., 2004; Ahn, 2006), along with one class “None” for the non-trigger tokens, constitutes a 34-class classification problem. For comparison purpose, we use the corpus in the traditional way, randomly selecting 30 articles in English from different genres as the development set, and utilizing a separate set of 40 English newswire articles as the test set. The remaining 529 English articles are used as the training set. 
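For concreteness, the self-regulation loss of Eqs. (12)–(14) reduces to a matrix product followed by a Frobenius-style norm. The sketch below follows Eq. (14) as printed and sums the per-sentence losses over a batch; the names and toy sizes are ours.

```python
import numpy as np

def self_regulation_loss(O_g, O_gadv):
    """L_diff of Eqs. (12)-(14) for one sentence.

    O_g    : feature matrix from the cooperative generator G,    shape (l, d)
    O_gadv : feature matrix from the adversarial generator G-hat, shape (l, d)
    Rows are the per-token feature vectors in generation order.
    """
    S = O_g @ O_gadv.T                        # pairwise scalar products (l x l)
    return np.sqrt(np.sum(np.abs(S) ** 2))    # Eq. (14)

def batch_loss(pairs):
    # L_SELF is simply the sum of the per-sentence losses.
    return sum(self_regulation_loss(a, b) for a, b in pairs)

rng = np.random.default_rng(2)
l, d = 5, 8                                   # padded length l, feature size d
O_g, O_gadv = rng.normal(size=(l, d)), rng.normal(size=(l, d))
print(self_regulation_loss(O_g, O_gadv))
```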
5.2 Hyperparameter Settings The word embeddings are initialized with the 300dimensional real-valued vectors. We follow Chen et al (2015) and Feng et al (2016) to pre-train the embeddings over NYT corpus using Mikolov et al (2013)’s skip-gram tool. The entity type embeddings, as usual (Nguyen et al., 2016; Feng et al., 2016; Liu et al., 2017b), are specified as the 50dimensional real-valued vectors. They are initialized with the 32-bit floating-point values, which are all randomly sampled from the uniformly distributed values in [-1, 1]1. We initialize other adjustable parameters of the back-propagation algorithm by randomly sampling in [-0.1, 0.1]. We follow Feng et al (2016) to set the dropout rate as 0.2 and the mini-batch size as 10. We 1https://www.tensorflow.org/api docs/python/tf/random uniform tune the initialized parameters mentioned above, harmonic coefficient λ, learning rate and the L2 norm on the development set. Grid search (Liu et al., 2017a) is used to seek for the optimal parameters. Eventually, we take the coefficient λ of 0.1+3, learning rate of 0.3 and L2 norm of 0. The source code of SELF2 to reproduce the experiments has been made publicly available. 5.3 Compared Systems The state-of-the-art models proposed in the past decade are compared with ours. By taking learning framework as the criterion, we divide the models into three classes: Minimally supervised approach: is Peng et al (2016)’s MSEP-EMD. Feature based approaches: primarily including Liao and Grishman (2010)’s Cross-Event inference model, which is based on the max-entropy classification and embeds the document-level confident information in the feature space; Hong et al (2011)’s Cross-Entity inference model, in which existential backgrounds of name entities are employed as the additional discriminant features; and Li et al (2013)’s Joint model, a sophisticated predictor frequently ranked among the top 3 in recent TAC-KBP evaluations for nugget and coreference detection (Hong et al., 2014, 2015; Yu et al., 2016). It is based on structured perceptron and combines the local and global features. Neural network based approaches: including the convolutional neural network (CNN) (Nguyen and Grishman, 2015), the non-consecutive Ngrams based CNN (NC-CNN) (Nguyen and Grishman, 2016) and the CNN that is assembled with a dynamic multi-pooling layer (DM-CNN) (Chen et al., 2015). Others include Ghaeini et al (2016)’s forward-backward recurrent neural network (FBRNN) which is developed using gated recurrent units (GRU), Nguyen et al (2016)’s bidirectional RNN (Bi-RNN) and Feng et al (2016)’s Hybrid networks that consist of a Bi-LSTM and a CNN. Besides, we compare our model with Liu et al (2016b)’s artificial neural networks (ANNs), Liu et al (2017b)’s attention-based ANN (ANN-S2) and Chen et al (2017)’s DM-CNN∗. The models recently have become popular because, although simple in structure, they are very analytic by learning from richer event examples, such as those in 2https://github.com/JoeZhouWenxuan/Self-regulationEmploying-a-Generative-Adversarial-Network-to-ImproveEvent-Detection/tree/master 520 Method P (%) R (%) F (%) Joint (Local+Global) 76.9 65.0 70.4 MSEP-EMD 75.6 69.8 72.6 DM-CNN 80.4 67.7 73.5 DM-CNN∗ 79.7 69.6 74.3 Bi-RNN 68.5 75.7 71.9 Hybrid: Bi-LSTM+CNN 80.8 71.5 75.9 SELF: Bi-LSTM+GAN 75.3 78.8 77.0 Table 1: Trigger identification performance FrameNet (FN) and Wikipeida (Wiki). 5.4 Experimental Results We evaluate our model using Precision (P), Recall (R) and F-score (F). 
To facilitate the comparison, we review the best performance of the competitors, which has been evaluated using the same metrics, and publicly reported earlier. Trigger identification Table 1 shows the trigger identification performance. It can be observed that SELF outperforms other models, with a performance gain of no less than 1.1% F-score. Frankly, the performance mainly benefits from the higher recall (78.8%). But in fact the relatively comparable precision (75.3%) to the recall reinforces the advantages. By contrast, although most of the compared models achieve much higher precision over SELF, they suffer greatly from the substantial gaps between precision and recall. The advantage is offset by the greater loss of recall. GAN plays an important role in optimizing BiRNN. This is proven by the fact that SELF (BiLSTM+GAN) outperforms Nguyen et al (2016)’s Bi-RNN. To be honest, the models use two different kinds of recurrent units. Bi-RNN uses GRUs, but SELF uses the units that possess LSTM. Nevertheless, GRU has been experimentally proven to be comparable in performance to LSTM (Chung et al., 2014; Jozefowicz et al., 2015). This allows a fair comparison between Bi-RNN and SELF. Event classification Table 2 shows the performance of multi-class classification. SELF achieves nearly the same F-score as Feng et al (2016)’s Hybrid, and outperforms the others. More importantly, SELF is the only one which obtains a performance higher than 70% for both precision and recall. Besides, by analyzing the experimental results, we have identified the following regularities: Methods P (%) R (%) F (%) MSEP-EMD 70.4 65.0 67.6 Cross-Event 68.8 68.9 68.8 Cross-Entity 72.9 64.3 68.3 Joint (Local+Global) 73.7 62.3 67.5 CNN 71.8 66.4 69.0 DM-CNN 75.6 63.6 69.1 NC-CNN 71.3 FB-RNN (GRU) 66.8 68.0 67.4 Bi-RNN (GRU) 66.0 73.0 69.3 ANNs (ACE+FN) 77.6 65.2 70.7 DM-CNN∗(ACE+Wiki) 75.7 66.0 70.5 ANN-S2 (ACE+FN) 76.8 67.5 71.9 Hybrid: Bi-LSTM+CNN 84.6 64.9 73.4 SELF: Bi-LSTM+GAN 71.3 74.7 73.0 Table 2: Detection performance (trigger identification plus multi-class classification) • Similar to the pattern classifiers that are based on hand-designed features, the CNN models enable higher precision to be obtained. However the recall is lower. • The RNN models contribute to achieving a higher recall. However the precision is lower. • Expansion of the training data set helps to increase the precision. Let us turn to the structurally more complicated models, SELF and Hybrid. SELF inherits the merits of the RNN models, classifying the events with higher recall. Besides, by the utilization of GAN, SELF has evolved from the traditional learning strategies, being capable of learning from GAN and getting rid of the mistakenly generated spurious features. So that it outperforms other RNNs, with improvements of no less than 4.5% precision and 1.7% recall. Hybrid is elaborately established by assembling a RNN with a CNN. It models an event from two perspectives: language generation and pragmatics. The former is deeply learned by using the continuous states hidden in the recurrent units, while the later the convolutional features. Multi-angled cognition enables Hybrid to be more precise. However it is built using a single-channel architecture, concatenating the RNN and the CNN. This results in twofold accumulation of feature information, causing a serious overfitting problem. Therefore, Hybrid is localized to much higher precision but substantially lower recall. 
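The precision–recall gaps analysed here and in Figure 2 follow directly from the usual micro-averaged counts; a small helper, written by us with made-up counts, makes the computation explicit.

```python
def micro_prf(tp, fp, fn):
    """Micro precision, recall, F-score and the P-R gap from global counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f, abs(p - r)

# e.g. a system that predicts few triggers: high precision, low recall, wide gap
print(micro_prf(tp=650, fp=120, fn=350))
```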
Overfitting enlarges the gap between precision and recall when the task becomes more difficult. For Hybrid, as illustrated in Figure 2, the gap becomes much wider (from 9% to 19.7%) when the binary classification task (trigger identification) is shifted to multi-class classification (event detection). By contrast, the other models show a nearly constant gap. In particular, SELF yields the minimum gap in each task, which changes negligibly from 3.5% to 3.4%.

Figure 2: Gaps between precision and recall in the tasks of trigger identification and event classification (one panel per model: MSEP-EMD, Joint, DM-CNN, DM-CNN*, Hybrid (Bi-LSTM+CNN), Bi-RNN (GRU) and SELF (Bi-LSTM+GAN)).

It may be added that, similar to DM-CNN and FB-RNN, SELF is cost-effective. Compared to other models (Table 3), it either uses less training data or only needs to learn two kinds of embeddings, namely those of words and entity types.

Methods    Embedding Types       Training Data
ANNs       word                  ACE+FN
ANN-S2     word, NE-type         ACE+FN
DM-CNN∗    word, PSN             ACE+Wiki
CNN        word, NE-type, PSN    ACE
NC-CNN     word, NE-type, PSN    ACE
Bi-RNN     word, NE-type, DEP    ACE
Hybrid     word, NE-type, PSN    ACE
DM-CNN     word, PSN             ACE
FB-RNN     word, branch          ACE
SELF       word, NE-type         ACE
Table 3: Embedding types and training data (DEP: dependency grammar; PSN: position)

5.5 Discussion: Adaptation, Robustness and Effectiveness

Domain adaptation is a key criterion for evaluating the utility of a model in practical applications. A model can be considered adaptable only if it works well on unlabeled data in the target domain when trained on the source domain (Blitzer et al., 2006; Plank and Moschitti, 2013). We perform two groups of domain adaptation experiments, using the ACE 2005 corpus and the corpus of the TAC-KBP 2015 event nugget track (Ellis et al., 2015), respectively. The ACE corpus consists of 6 domains: broadcast conversation (bc), broadcast news (bn), telephone conversation (cts), newswire (nw), usenet (un) and web blogs (wl). Following the common practice of adaptation research on this data (Nguyen and Grishman, 2014, 2015; Plank and Moschitti, 2013), we take the union of bn and nw as the source domain and bc, cts and wl as three different target domains. We randomly select half of the instances from bc to constitute the development set. The TAC-KBP corpus consists of 2 domains: newswire (NW) and discussion forum (DF). We follow Peng et al. (2016) and use NW and DF in alternation as the source domain, with the other serving as the target domain. We randomly select a proportion (20%) of the instances from the target domain to constitute the development set. We compare with Joint, CNN, MSEP-EMD, SSED (Sammons et al., 2015) and Hybrid. All the models except Hybrid have previously been evaluated for domain adaptation; we cite the best performance they reported. We reproduce Hybrid using the source code provided by the authors.
To ensure a fair comparison, we perform 3 runs; in each run, both Hybrid and SELF were re-developed on a new development set. What we report herein is the average performance over the 3 runs.

Adaptation Performance. We show the adaptation performance on the ACE corpus in Table 4 and that on TAC-KBP in Table 5.

Methods | In-domain (bn+nw) P/R/F | Out-of-domain (bc) P/R/F (Loss) | Out-of-domain (cts) P/R/F (Loss) | Out-of-domain (wl) P/R/F (Loss)
Joint   | 72.9/63.2/67.7 | 68.8/57.5/62.6 (↓5.1) | 64.5/52.3/57.7 (↓10.0) | 56.4/38.5/45.7 (↓22.0)
CNN     | 69.2/67.0/68.0 | 70.2/65.2/67.6 (↓0.4) | 68.3/58.2/62.8 (↓5.2)  | 54.8/42.0/47.5 (↓20.5)
Hybrid  | 68.8/54.8/61.0 | 64.7/58.8/61.6 (↑0.6) | 59.9/50.6/54.9 (↓6.1)  | 54.0/37.9/44.5 (↓16.5)
SELF    | 73.8/65.7/69.5 | 70.0/67.2/68.9 (↓0.6) | 68.3/60.2/63.3 (↓6.2)  | 58.0/44.0/50.0 (↓19.5)
Table 4: Experimental results of domain adaptation on the ACE 2005 corpus. Each cell gives P(%)/R(%)/F(%); Loss is the change in F-score relative to the in-domain setting.

Methods   | In-domain (NW) P/R/F | Out-of-domain (DF) P/R/F (Loss) | In-domain (DF) P/R/F | Out-of-domain (NW) P/R/F (Loss)
MSEP-EMD  | NA/NA/58.5     | NA/NA/52.8 (↓5.7)      | NA/NA/57.9     | NA/NA/55.1 (↓2.8)
SSED      | NA/NA/63.7     | NA/NA/52.3 (↓11.4)     | NA/NA/62.6     | NA/NA/54.8 (↓7.8)
Hybrid    | 72.6/55.4/62.9 | 62.3/39.2/48.1 (↓14.8) | 66.0/42.6/51.8 | 59.1/48.4/53.3 (↑1.5)
SELF      | 67.6/60.6/63.9 | 69.0/58.7/56.7 (↓7.2)  | 70.5/48.3/57.3 | 69.3/51.7/59.2 (↑1.9)
Table 5: Experimental results of domain adaptation on the TAC-KBP 2015 corpus (NA: not released)

It can be observed that SELF outperforms the other models in the out-of-domain scenarios. Besides, when testing is performed on the out-of-domain ACE corpus, the performance degradation of SELF is not much larger than that of CNN and Hybrid. When the out-of-domain TAC-KBP corpus is used, the performance of SELF is impaired much less severely than that of SSED and Hybrid. More importantly, the adaptability of SELF is relatively close to that of MSEP-EMD. Considering that MSEP-EMD is stable because it uses minimal supervision (Peng et al., 2016), we suggest that the fully trained networks in SELF are not extremely inflexible but, on the contrary, should be transferable for use (Ge et al., 2016).

Robustness in Resource-Poor Settings. We discuss two resource-poor conditions in this section: the lack of in-domain training data and the lack of out-of-domain training data. Hybrid and SELF are brought into the discussion. For the former (in-domain) case, we examined the numbers of samples used for training in the adaptation experiments, which are shown in Table 6. It can be observed that the domain of NW contains the minimum total number of training samples (triggers plus tokens). By contrast, the domain of bn+nw contains the smallest number of positive samples (triggers), though an overwhelming number of negative samples (general tokens). Under such conditions, Hybrid performs better in the domain of NW than in bn+nw and DF across the three in-domain adaptation experiments (see the column labelled "In-domain (bn+nw)" in Table 4 as well as "In-domain (NW)" and "In-domain (DF)" in Table 5). This illustrates that Hybrid does not necessarily rely on a tremendous number of training samples to ensure robustness. But SELF does. It needs far more negative samples than Hybrid for the following reasons:

• It relies on the use of spurious features to implement self-regulation during training.
• For a positive sample, the relevant spurious features (if any) most probably hide in some negative samples.

• It is impossible to know in advance which negative samples contain them.

Therefore, taking into consideration as many negative samples as possible may help to increase the probability that the spurious features will be discovered. This is demonstrated by the fact that SELF obtains better performance in the domain of bn+nw but not NW (see the column labeled "Training" in Table 6 and the "In-domain" columns in Tables 4 and 5). It may be added that SELF performs worse in DF although more negative samples are used for training there (see Table 6). A glance at the number of positive samples shows that it is approximately 2.4 times as large as in bn+nw, while the number of negative samples in DF is only about 1.5 times as large as in bn+nw. This implies that, if more positive samples are used for training, SELF needs to consume proportionally more negative samples for self-regulation; otherwise, the performance will degrade.

Domain   Training (trigger / token)   Testing (trigger / token)
bn+nw    1,721 / 74,179               343 / 16,336
NW       2,098 / 31,014               2,813 / 55,459
DF       4,106 / 109,275              1,773 / 43,877
Table 6: Data distribution in the source domains

For the out-of-domain case, both Hybrid and SELF face the problem that no target-domain data is available for training. In this case, SELF displays less performance degradation (7.2%) than Hybrid (14.8%) when NW is used for training.
In the earlier study, another trend is to explore the features that best characterize each event class, so as to facilitate supervised classification. A variety of strategies have emerged for converting classification clues into feature vectors (Ahn, 2006; Patwardhan and Riloff, 2009; Liao and Grishman, 2010; Hong et al., 2011; Li et al., 2013, 2014; Wei et al., 2017). Benefiting from the general modeling framework, the methods enable the fusion of multiple features, and more importantly, they are flexible to use by feature selection. But considerable expertise is required for feature engineering. Recently, the use of neural networks for event detection has become a promising line of research. The closely related work has been presented in section 5.3. The primary advantages of neural networks have been demonstrated in the work, such as performance enhancement, self-learning capability and robustness. The generative adversarial network (Goodfellow et al., 2014) has emerged as an increasingly popular approach for text processing (Zhang et al., 2016; Lamb et al., 2016; Yu et al., 2017). Liu et al (2017a) use the adversarial multi-task learning for text classification. We follow the work to create spurious features, but use them to regulate the self-learning process in a single-task situation. 7 Conclusion We use a self-regulated learning approach to improve event detection. In the learning process, the adversarial and cooperative models are utilized in decontaminating the latent feature space. In this study, the performance of the discriminator in the adversarial network is left to be evaluated. Most probably, the discriminator also performs well because it is gradually enhanced by fierce competition. Considering this possibility, we suggest to drive the two discriminators in our self-regulation framework to cooperate with each other. Besides, the global features extracted in Li et al (2013)’s work are potentially useful for detecting the event instances referred by pronouns, although involve noises. Therefore, in the future, we will encode the global information by neural networks and use the self-regulation strategy to reduce the negative influence of noises. Acknowledgments We thank Xiaocheng Feng and his colleagues who shared the source code of Hybrid with us. This work was supported by the national Natural Science Foundation of China (NSFC) via Grant Nos. 61525205, 61751206, 61672368. 524 References David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning about Time and Events, Association for Computational Linguistics (ACL’06). Association for Computational Linguistics, pages 1–8. http://www.aclweb.org/anthology/W06-0901. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 conference on Empirical Methods in Natural Language Processing (EMNLP’06). Association for Computational Linguistics, pages 120–128. http://www.aclweb.org/anthology/W06-1615. Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. In Advances in Neural Information Processing Systems. pages 343– 351. Kai Cao, Xiang Li, Miao Fan, and Ralph Grishman. 2015a. Improving event detection with active learning. In Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP’15). pages 72–77. http://www.aclweb.org/anthology/R15-1010. Kai Cao, Xiang Li, and Ralph Grishman. 2015b. 
Improving event detection with dependency regularization. In Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP’15). pages 78–83. http://www.aclweb.org/anthology/R15-1011. Yubo Chen, Shulin Liu, Xiang Zhang, Kang Liu, and Jun Zhao. 2017. Automatically labeled data generation for large scale event extraction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL’17). volume 1, pages 409–419. https://doi.org/10.18653/v1/P171038. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, Jun Zhao, et al. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the 53th Annual Meeting of the Association for Computational Linguistics (ACL’15). pages 167–176. https://doi.org/10.3115/v1/P151017. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 . Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning (ICML’08). ACM, pages 160– 167. George R Doddington, Alexis Mitchell, Mark A Przybocki, Lance A Ramshaw, Stephanie Strassel, and Ralph M Weischedel. 2004. The automatic content extraction (ACE) program-tasks, data, and evaluation. In LREC. volume 2, pages 1–4. http://www.aclweb.org/anthology/L04-1011. Joe Ellis, Jeremy Getman, Dana Fore, Neil Kuster, Zhiyi Song, Ann Bies, and Stephanie Strassel. 2015. Overview of linguistic resources for the tac kbp 2015 evaluations: Methodologies and results. In Proceedings of TAC KBP 2015 Workshop, National Institute of Standards and Technology (TAC’15). pages 16– 17. Xiaocheng Feng, Lifu Huang, Duyu Tang, Heng Ji, Bing Qin, and Ting Liu. 2016. A languageindependent neural network for event detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL’16). volume 2, pages 66–71. https://doi.org/10.18653/v1/P16-2011. Tao Ge, Lei Cui, Baobao Chang, Zhifang Sui, and Ming Zhou. 2016. Event detection with burst information networks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. pages 3276–3286. Reza Ghaeini, Xiaoli Fern, Liang Huang, and Prasad Tadepalli. 2016. Event nugget detection with forward-backward recurrent neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL’16). volume 2, pages 369–373. https://doi.org/10.18653/v1/P16-2060. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems. pages 2672–2680. Ralph Grishman, David Westbrook, and Adam Meyers. 2005. Nyu’s English ACE 2005 system description. ACE’05 . Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing coadaptation of feature detectors. arXiv preprint arXiv:1207.0580 . Yu Hong, Di Lu, Dian Yu, Xiaoman Pan, Xiaobin Wang, Yadong Chen, Lifu Huang, and Heng Ji. 2015. RPI BLENDER TAC-KBP2015 system description. In Proceedings of Text Analysis Conference (TAC’15). Yu Hong, Xiaobin Wang, Yadong Chen, Jian Wang, Tongtao Zhang, Jin Zheng, Dian Yu, Qi Li, Boliang Zhang, Han Wang, et al. 2014. 
RPI BLENDER TAC-KBP2014 knowledge base population system. In Proceedings of Text Analysis Conference (TAC’14). 525 Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT’11). Association for Computational Linguistics, pages 1127–1136. http://www.aclweb.org/anthology/P11-1113. Xun Huang, Yixuan Li, Omid Poursaeed, John Hopcroft, and Serge Belongie. 2017. Stacked generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). volume 2, page 4. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991 . Md M Islam, Xin Yao, and Kazuyuki Murase. 2003. A constructive algorithm for training cooperative neural network ensembles. IEEE Transactions on neural networks 14(4):820–834. Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML’15). pages 2342–2350. Alex M Lamb, Anirudh Goyal ALIAS PARTH GOYAL, Ying Zhang, Saizheng Zhang, Aaron C Courville, and Yoshua Bengio. 2016. Professor forcing: A new algorithm for training recurrent networks. In Advances In Neural Information Processing Systems. pages 4601–4609. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51th Annual Meeting of the Association for Computational Linguistics (ACL’13). pages 73–82. http://www.aclweb.org/anthology/P13-1008. Qi Li, Heng Ji, HONG Yu, and Sujian Li. 2014. Constructing information networks using one single model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP’14). pages 1846–1851. https://doi.org/10.3115/v1/D14-1198. Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL’10). Association for Computational Linguistics, pages 789–797. http://www.aclweb.org/anthology/P10-1081. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130 . Ming-Yu Liu and Oncel Tuzel. 2016. Coupled generative adversarial networks. In Advances in neural information processing systems. pages 469–477. Pengfei Liu, Xipeng Qiu, Jifan Chen, and Xuanjing Huang. 2016a. Deep fusion LSTMs for text semantic matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL’16). volume 1, pages 1034–1043. https://doi.org/10.18653/v1/P16-1098. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017a. Adversarial multi-task learning for text classification. arXiv preprint arXiv:1704.05742 https://doi.org/10.18653/v1/P17-1001. Shulin Liu, Yubo Chen, Shizhu He, Kang Liu, and Jun Zhao. 2016b. Leveraging framenet to improve automatic event detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL’16). https://doi.org/10.18653/v1/P16-1201. Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017b. Exploiting argument information to improve event detection via supervised attention mechanisms 1:1789–1797. 
https://doi.org/10.18653/v1/P171164. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL’13). volume 13, pages 746–751. http://www.aclweb.org/anthology/N13-1090. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL’16). pages 300–309. https://doi.org/10.18653/v1/N16-1034. Thien Huu Nguyen and Ralph Grishman. 2014. Employing word representations and regularization for domain adaptation of relation extraction. In Proceedings of the 52th Annual Meeting of the Association for Computational Linguistics (ACL’14). pages 68–74. https://doi.org/10.3115/v1/P14-2012. Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proceedings of the 53th Annual Meeting of the Association for Computational Linguistics (ACL’15). pages 365–371. https://doi.org/10.3115/v1/P15-2060. Thien Huu Nguyen and Ralph Grishman. 2016. Modeling skip-grams for event detection with convolutional neural networks. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP’16). pages 886–891. https://doi.org/10.18653/v1/D16-1085. Siddharth Patwardhan and Ellen Riloff. 2009. A unified model of phrasal and sentential evidence 526 for information extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP’09). Association for Computational Linguistics, pages 151–160. http://www.aclweb.org/anthology/D09-1016. Haoruo Peng, Yangqiu Song, and Dan Roth. 2016. Event detection and co-reference with minimal supervision. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP’16). pages 392–402. https://doi.org/10.18653/v1/D16-1038. Barbara Plank and Alessandro Moschitti. 2013. Embedding semantic similarity in tree kernels for domain adaptation of relation extraction. In Proceedings of the 51th Annual Meeting of the Association for Computational Linguistics (ACL’13). pages 1498–1507. http://www.aclweb.org/anthology/P131147. Mark Sammons, Haoruo Peng, Yangqiu Song, Shyam Upadhyay, Chen-Tse Tsai, Pavankumar Reddy, Subhro Roy, and Dan Roth. 2015. Illinois CCG TAC 2015 event nugget, entity discovery and linking, and slot filler validation systems. In Proceedings of Text Analytics Conference (TAC’15). Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673–2681. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL’10). Association for Computational Linguistics, pages 384–394. https://doi.org/http://www.aclweb.org/anthology/P101040. Mengqiu Wang and Christopher D Manning. 2013. Effect of non-linear deep architecture in sequence labeling. 
In Proceedings of the Sixth International Joint Conference on Natural Language Processing (IJCNLP’13). pages 1285–1291. https://doi.org/http://www.aclweb.org/anthology/I131183. Sam Wei, Igor Korostil, Joel Nothman, and Ben Hachey. 2017. English event detection with translated language features. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL’17). volume 2, pages 293– 298. https://doi.org/10.18653/v1/P17-2046. Dian Yu, Xiaoman Pan, Boliang Zhang, Lifu Huang, Di Lu, Spencer Whitehead, and Heng Ji. 2016. RPI BLENDER TAC-KBP2016 system description. In Proceedings of Text Analysis Conference (TAC’16). Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI’17). pages 2852–2858. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 . Yizhe Zhang, Zhe Gan, and Lawrence Carin. 2016. Generating text via adversarial training. In NIPS workshop on Adversarial Training. volume 21.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 527–536 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 527 Context-Aware Neural Model for Temporal Information Extraction Yuanliang Meng Text Machine Lab for NLP Department of Computer Science University of Massachusetts Lowell [email protected] Anna Rumshisky Text Machine Lab for NLP Department of Computer Science University of Massachusetts Lowell [email protected] Abstract We propose a context-aware neural network model for temporal information extraction, with a uniform architecture for event-event, event-timex and timex-timex pairs. A Global Context Layer (GCL), inspired by the Neural Turing Machine (NTM), stores processed temporal relations in the narrative order, and retrieves them for use when the relevant entities are encountered. Relations are then classified in this larger context. The GCL model uses long-term memory and attention mechanisms to resolve long-distance dependencies that regular RNNs cannot recognize. GCL does not use postprocessing to resolve timegraph conflicts, outperforming previous approaches that do so. To our knowledge, GCL is also the first model to use an NTM-like architecture to incorporate the information about global context into discourse-scale processing of natural text. 1 Introduction Extracting information about the order and timing of events from text is crucial to any system that attempts an in-depth natural language understanding, whether related to question answering, temporal inference, or other related tasks. Earlier temporal information extraction (TemporalIE) systems tended to rely on traditional statistical learning with feature-engineered task-specific models, typically used in succession (Yoshikawa et al., 2009; Ling and Weld, 2010; Sun et al., 2013; Chambers et al., 2014; Mirza and Minard, 2015). Recently, there have been some attempts to extract temporal relations with neural network models, particularly with recurrent neural networks (RNN) models (Meng et al., 2017; Cheng and Miyao, 2017; Tourille et al., 2017) and convolutional neural networks (CNN) (Lin et al., 2017). These models predominantly use token embeddings as input, avoiding handcrafted features for each task. Typically, neural network models outperform traditional statistical models. Some studies also try to combine neural network models with rule-based information retrieval methods (Fries, 2016). These systems require different models for different pair types, so several models must be combined to fully process text. A common disadvantage of all these models is that they build relations from isolated pairs of entities (events or temporal expressions). This context-blind, pairwise classification often generates conflicts in the resulting timegraph. Common ways of ameliorating the conflicts is to apply some ad hoc constraints to account for basic properties of relations (e.g. transitivity), often without considering the content of the text per se. For example, Ling and Weld (2010) designed transitivity formulae, used with local features. Sun (2014) proposed a strategy that “prefers the edges that can be inferred by other edges in the graph and remove the ones that are least so”. Another approach is to use the results from separate classifiers to rank results according to their general confidence (Mani et al., 2007; Chambers et al., 2014). High-ranking results overwrite low-ranking ones. Meng et al. 
(2017) used a greedy pruning algorithm to remove weak edges from the timegraph until it is coherent.

When humans read text, we certainly do not follow the procedure of first interpreting relations only locally and later coming up with a compromise solution that involves all the entities. Instead, if local information is insufficient, we consider the relevant information from the wider context and resolve the ambiguity as soon as possible. The resolved relations are stored in our memory as "context" for further processing. If later evidence suggests our early interpretation was wrong, we can correct it. This paper proposes a model to simulate such mechanisms. Our model introduces a Global Context Layer (GCL), inspired by the Neural Turing Machine (NTM) architecture (Graves et al., 2014), to store processed relations in narrative order and retrieve them for use when related entities are encountered. The stored information can also be updated if necessary, allowing for self-correction.

This paper's contributions are as follows. To our knowledge, this is the first attempt to use neural network models with updateable external memory to incorporate global context information for discourse-level processing of natural text in general and for temporal relation extraction in particular. It gives a uniform treatment of all pairs of temporally relevant entities. We obtain state-of-the-art results on TimeBank-Dense, which is a standard benchmark for TemporalIE.

2 Dataset

We train and evaluate our model on TimeBank-Dense (Chambers et al., 2014; available at https://www.usna.edu/Users/cs/nchamber/caevo/#corpus). There are 6 classes of relations: SIMULTANEOUS, BEFORE, AFTER, IS INCLUDED, INCLUDES, and VAGUE. TimeBank-Dense annotation aims to approximate a complete temporal relation graph by including all intra-sentential relations, all relations between adjacent sentences, and all relations with document creation time. TimeBank-Dense is one of the standard benchmarks for intrinsic evaluation of TemporalIE systems. We follow the experimental setup in Chambers et al. (2014), which splits the corpus into training/validation/test sets of 22, 5, and 9 documents, respectively. Previous publications often use the micro-averaged F1 score, which is equivalent to accuracy in this case. We also rely on the micro-averaged F1 score for model selection and evaluation.

Following Meng et al. (2017), we augment the data by flipping all pairs, except for relations involving document creation time (DCT). In other words, if a pair (ei, ej) exists, we add (ej, ei) to the dataset with the opposite label (e.g. BEFORE becomes AFTER). The augmentation applies to the validation and test sets also. In the final evaluation, a double-checking technique picks one result from the two-way classification, based on output scores. The dataset is heavily imbalanced: the training set has as much as 44.1% VAGUE labels, whereas only 1.8% of the labels are SIMULTANEOUS. We did not do any up-sampling or down-sampling.

3 System

Our system has two main components. The first one is a pairwise relation classifier, and the other is the Global Context Layer (GCL). The pairwise relation classifier follows the architecture designed by Meng et al. (2017), which uses the dependency paths to the least common ancestor (LCA) from each entity as input. We train the first component first, and then assemble the two components in a combined neural network to continue training. Fig. 1 gives an overview of the system.

Figure 1: System overview.
Originally, the pre-trained system has one more dense layer and an output layer, but they are truncated before combination. The max pooling layers on top of each Bi-LSTM layer are omitted here.

3.1 Global Context Layer

The Global Context Layer (GCL) we propose is inspired by the Neural Turing Machine (NTM) architecture, which is an extension of a recurrent neural network with external memory and an attention mechanism for reading and writing to that memory. NTM has been shown to perform basic tasks such as copying, sorting, and associative recall (Graves et al., 2014). The external memory not only enables a large (theoretically infinite) capacity for information storage, but also allows flexible access based on attention mechanisms. Essentially, GCL is a specialized form of NTM, which eliminates some parameters to facilitate training, and specializes some functions to impose restrictions. While not as powerful as the canonical NTM, it is more suitable for the task of retaining and updating global context information.

3.1.1 Motivation

Vanilla RNNs struggle with capturing long-distance dependencies. Gated RNNs such as LSTM have trainable gates to address the "vanishing and exploding gradient" problem (Hochreiter and Schmidhuber, 1997). At each time step, such a model chooses what to memorize and forget, so patterns over arbitrary time intervals can be recognized. However, the memory in LSTM is still short-term. No matter how long the cell states keep certain information, once it is forgotten, it is lost forever. Such a mechanism suffices for modeling contiguous sequences. For example, sentences are natural units for such models, since a sentence starts only after the preceding sentence is finished, and LSTM may be an adequate tool to process sentences. However, when the sequences are not contiguous, as in temporal and other discourse-scale relations, LSTM models do not have the capability to look for input pieces across sequences.

When humans read text, discourse-level information is often distributed across the full scope of the text. To fully understand an article, we must be able to organize the processed information across sentences and paragraphs. In particular, to interpret temporal relations between entities in a sentence, sometimes we also look at relations with other entities elsewhere in the text. Such entities or relations form no regular sequences, and only a system with long-term memory as well as attention mechanisms can process them. An NTM-like architecture has an external memory with attention mechanisms, so it is an ideal candidate for such tasks. Furthermore, unlike the models that use attention over inputs (Vinyals et al., 2015; Kumar et al., 2016), NTM-like models are capable of updating previously stored representations. We describe below the GCL architecture that we use to store and update the global context information.

3.1.2 Reading

The input to the GCL layer is a concatenation of three layers from the pairwise neural network. Two of these are the entity context representation layers, encoded by the two LSTM branches. The other is the penultimate hidden layer before the output layer, which encodes the relation. We can write them as [e1, e2, x]. The context representations are used as "keys" to uniquely identify the entities.

Figure 2: GCL computing attention weights. Input entity representations are compared to the Key section of GCL memory. Slots with the same or similar entities get more attention.
Note that we use flat context embeddings, rather than dependency path embeddings, because dependency paths tend to be short and will also vary for the same entity, depending on the other entity in the pair. As such, they do not provide a unique way to represent an entity. The original design of NTM has a complex addressing mechanism for reading, which also makes it difficult to train. An important difference in GCL is that we separate the "key" component from the "content" component of memory. Each memory slot S[i] consists of [K[i]; M[i]], where S is the whole memory with n slots, i ≤ n is the index, K is the key and M is the content. Addressing is only performed on the key component. The key component stores the representation of the two entities, provided by the layers encoding the flat entity context:

$K[i] = e_{M1}[i] \oplus e_{M2}[i]$   (1)

Here $\oplus$ is the concatenation operator. In the GCL model, the read head computes a reading weight $W_{n \times 1}$ from the input entity representations $e_1, e_2$ and the entity representations $e_{M1}, e_{M2}$ in memory (i.e., the keys in each memory slot). The first step is to compute the distance between the current input and the memory columns, as shown in Eq. 2. $D[i]$ is the normalized squared Euclidean distance between the input key and the memory key of slot M[i]; $D'[i]$ is computed after flipping the two entities. We do so because the order of entities in a pair should not affect their relevance.

$D[i] = \frac{1}{Z} \lVert e_1 \oplus e_2 - e_{M1}[i] \oplus e_{M2}[i] \rVert_2^2, \qquad D'[i] = \frac{1}{Z'} \lVert e_2 \oplus e_1 - e_{M1}[i] \oplus e_{M2}[i] \rVert_2^2$   (2)

where $Z = \sum_i \lVert e_1 \oplus e_2 - e_{M1}[i] \oplus e_{M2}[i] \rVert_2^2$ is the normalization factor, and $Z'$ is defined analogously for the flipped case. The reading weight is then calculated as in Eq. 3, where $\mathbf{1}$ is a vector of all 1's:

$W[i] = \max\big(\mathrm{softmax}(\mathbf{1} - D)[i],\ \mathrm{softmax}(\mathbf{1} - D')[i]\big)$   (3)

Every element of W represents the relevance of the corresponding memory slot (see Fig. 2). Often it is still too blurred and needs to be further sharpened as in Eq. 4, where β is a positive number and $W^{\beta}$ is a point-wise exponentiation by the power β. A large β allows "winner takes all", so only the most relevant memory slots are read:

$W^{read} = \mathrm{softmax}(W^{\beta})$   (4)

The parameter β could be a constant or could be trainable. Our model computes it from the current input $x_t$ and the previous output $h_{t-1}$, so it varies at each time step. $W_{sharp}$ and $b_{sharp}$ are trainable weights and bias, $c_{\beta}$ is a constant, and ReLU is the rectified linear function:

$\beta_t = \mathrm{ReLU}(W_{sharp}[x_t, h_{t-1}] + b_{sharp}) + c_{\beta}$   (5)

With the sharpened reading weight vector, we obtain the read vector $r_{1 \times m}$ from M as a weighted sum, as in Eq. 6:

$r = \sum_i W^{read}[i]\, M[i]$   (6)

Generally speaking, the depth of memory M should be large enough to allow sparse encoding, so that crucial information is not lost after the summation. The read vector then contains contextual information relevant to the current input. Both the read vector and the current input are fed to the controller, yielding the GCL output. Unlike the canonical NTM, the GCL model does not have a trainable gate interpolating the $W_t$ computed at time t with $W_{t-1}$ computed at the previous time step. The weight vector is not passed to the next time step, so the attention has no "inertia". We tried two variants of the controller: (a) state-tracking, with an LSTM layer, and (b) stateless, with a dense layer. An LSTM controller has an internal state, and also has gates to select input and output. If the input data and/or the read vector from M have regular patterns with respect to time steps, an LSTM controller would be a better choice. For the specific task of temporal relation extraction, we saw no difference in performance.
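To make the reading procedure concrete, here is a minimal NumPy sketch of Eqs. 1–6. It is our own illustration rather than the authors' code; names such as `keys`, `memory` and `beta` are chosen for readability, and the sharpening parameter of Eq. 5 is assumed to be computed elsewhere and passed in as a scalar.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gcl_read(e1, e2, keys, memory, beta):
    """Read from GCL memory.

    e1, e2 : entity context representations, shape (d,)
    keys   : K, shape (n, 2*d), one stored entity pair per slot (Eq. 1)
    memory : M, shape (n, m), stored relation content
    beta   : sharpening scalar (> 0), from Eq. 5
    """
    q, q_flip = np.concatenate([e1, e2]), np.concatenate([e2, e1])
    d = ((keys - q) ** 2).sum(axis=1)            # squared Euclidean distances
    d_flip = ((keys - q_flip) ** 2).sum(axis=1)
    d, d_flip = d / d.sum(), d_flip / d_flip.sum()       # normalize (Eq. 2)
    w = np.maximum(softmax(1 - d), softmax(1 - d_flip))  # Eq. 3
    w_read = softmax(w ** beta)                          # sharpening (Eq. 4)
    return w_read @ memory                               # weighted sum (Eq. 6)
```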
3.1.3 Writing

The controller produces an output $h_t$, which is sent to the next layer and also used to update M. Similar to reading, the first step of the writing procedure is to compute an attention weight vector over the slots of M. As described above, the reading procedure computes a weighted sum over the slots of M; the writing procedure writes a weighted $h_t$ to each slot. The attention mechanism here is de facto a soft addressing mechanism: the slots with a higher attention value are the addresses which get more of an update. The same weight vector W computed in Eq. 3 is used for writing. However, an additional operation is introduced for writing. Recall that the weights are computed from entity representations. If the input entities are $e_1$ and $e_2$, the weight vector should have high values in the slots corresponding to $e_1$ and/or $e_2$. But we may not always want relevant memory slots to be overwritten; instead, additional information can be written to a different slot. Additionally, when M is relatively empty, as at the beginning, the addressing mechanism may treat all slots equally and uniformly update all slots in the same way. In this case we want the weight vector to shift each time, so M can diversify quickly. Therefore we use a shift function similar to the canonical NTM. The idea is to compute a shifted weight vector $\widetilde{W}$ by convolving W with a shift kernel s which maps a shift distance to a probability value. For example, s(-1) = 0.2, s(0) = 0.5, s(1) = 0.3 means the probabilities of shifting left, not shifting, and shifting right are 0.2, 0.5 and 0.3, respectively. Generally speaking, we want s to give zeros for most shift distances, so the shifting operation is limited to a small range:

$\widetilde{W}[i] = \sum_{j=0}^{n-1} W[j]\, s[i - j]$   (7)

At each time step, the shift kernel depends on the current input and output. If the allowed shift range is [-s/2, +s/2], we train a weight $W_s$ and bias $b_s$ to calculate the shift weights $C_{s \times 1}$:

$C_t = \mathrm{softmax}(W_s[x_t, h_t] + b_s)$   (8)

The weights are then mapped to a circulant kernel to perform the convolution in Eq. 7; the final output is $\widetilde{W}$. Finally, sharpening still needs to be applied. For the writing procedure, both addressing and shifting are "soft" in nature, and thus could yield a blurred outcome. Again, we train weights to obtain a sharpening parameter γ at each time step, and apply a softmax over $\widetilde{W}$:

$\gamma_t = \mathrm{ReLU}(W_{sharp}[x_t, h_t] + b_{sharp}) + c_{\gamma}$   (9)

$W^{write} = \mathrm{softmax}(\widetilde{W}^{\gamma})$   (10)

Here $\widetilde{W}^{\gamma}$ is the point-wise exponentiation of the shifted weight vector and $c_{\gamma}$ is a positive constant. The original NTM model has gates for interpolating $\widetilde{W}^{\gamma}$ at the current time step with the one computed at the previous time step, but we omit this operation. We also omit the erase vector and the add vector, so $W^{write}$ fully controls what to overwrite in M and what to retain. As a result, the writing operation can be expressed as:

$M_t[i] = M_{t-1}[i] + W^{write}[i]\,(h_t - M_{t-1}[i])$   (11)

The first term in Eq. 11 is the memory at the previous time step, and the second term is the update. We update the keys in the same way. As we can see, the keys come from entity representations, but are not exactly the same, due to $W^{write}$:

$K_t[i] = K_{t-1}[i] + W^{write}[i]\,(e_1 \oplus e_2 - K_{t-1}[i])$   (12)
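As with reading, the following is a minimal NumPy sketch of the writing procedure (Eqs. 7–12), intended only as an illustration of the equations above; it reuses the `softmax` helper and the numpy import from the previous sketch. `shift_probs` stands for the kernel $C_t$ of Eq. 8 and `gamma` for the sharpening parameter of Eq. 9, both assumed to be computed elsewhere, and the shift direction convention is our own choice.

```python
def gcl_write(e1, e2, h_t, keys, memory, w, shift_probs, gamma):
    """Write the controller output h_t into GCL memory.

    w           : unsharpened attention weights from Eq. 3, shape (n,)
    shift_probs : probabilities over shift offsets, e.g. for (-1, 0, +1)
    gamma       : sharpening scalar (> 0)
    """
    offsets = np.arange(len(shift_probs)) - len(shift_probs) // 2
    # circular convolution with the shift kernel (Eq. 7)
    w_shift = sum(p * np.roll(w, int(k)) for p, k in zip(shift_probs, offsets))
    w_write = softmax(w_shift ** gamma)                      # Eqs. 9-10
    q = np.concatenate([e1, e2])
    memory = memory + w_write[:, None] * (h_t - memory)      # Eq. 11
    keys = keys + w_write[:, None] * (q - keys)              # Eq. 12
    return keys, memory
```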
3.1.4 GCL vs. Canonical NTM

We highlight below some major differences between the canonical NTM and the GCL model. Typically, NTM computes the keys for accessing different memory addresses from its input and output. In GCL, the keys are simply the entity representations [e1, e2] from the input, in either order. The key function effectively involves slicing and flipping the input. Further discussion of the differences between the GCL addressing mechanism and some of the other NTM variations is provided in Sec. 5.

Another major difference is that we do not use any gates to interpolate the attention vector at the current time step with the one from the previous time step. Instead, the previous attention vector is ignored entirely. Since we do not compute the erase vector or the add vector, this allows the attention vector to fully control memory updates. In addition, we unified the trainable weights for calculating β and γ at each time step. We found these parameters not to be crucial, and setting them to constants does not affect the results.

We also do not shift attention for reading. A possible advantage of shifting attention is that neighboring slots of the focus can also be accessed, providing a way to simulate associative recall. This is based on the fact that the writing procedure tends to write similar memories close to each other. However, in this study we want the reading procedure to be restricted. Associative recall can be realized from the attention vector itself, without shifting.

3.2 Pairwise Classification Model

The pairwise model classifies individual entity pairs, where entities are events and time expressions (timexes). In other words, for each pair, we only use the local context, and the relation of one pair does not affect the classification results for other pairs. We follow the architecture proposed in Meng et al. (2017), but with the following changes: (1) all three types of pairs are handled by the same neural network, rather than by three separately trained models; (2) the neighboring words (a flat context) of entity mentions are used to generate input, in addition to words on syntactic dependency paths; (3) all timex-timex pairs are included as well, not only event-timex and event-event pairs; (4) every pair is assigned a 3-dimensional "time value", to approximate the rule-based approach when possible.

3.2.1 Event Pairs and Event-Timex Pairs

The TimeBank-Dense dataset labels three types of pairs: intra-sentence, cross-sentence and document creation time (DCT). For intra-sentence pairs and cross-sentence pairs, we follow Meng et al. (2017). The shortest dependency path between the two entities is identified, and the word embeddings from the path to the least common ancestor for each entity are processed by two LSTM branches, with a separate max pooling layer for each branch. The path to the root is used for cross-sentence relations. For relations with the DCT, we use the single word now as a placeholder for the DCT branch. Unlike Meng et al. (2017), we allow the model to accept all three pair types, with a "pair type" feature as a component of the input, defined as an integer with the value 1, -1 or 0, respectively.

In addition to the shortest dependency path, our model also uses a flat local context window, that is, the words around each entity mention, regardless of syntactic structure. For an entity starting with word $w_i$, the local context window is 5 words to its left and 10 words to its right, i.e. $w_{i-5} w_{i-4} \ldots w_i w_{i+1} \ldots w_{i+10}$. The windows are cut short at the edge of a sentence, or when the second entity is encountered.
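A minimal sketch of one way to extract such a window is given below; it reflects our reading of the rule above, and the boundary handling (in particular for the second entity) is an assumption rather than the authors' exact implementation.

```python
def flat_context_window(tokens, i, other_start=None, left=5, right=10):
    """Flat local context window for an entity starting at token index i.

    tokens      : the words of one sentence
    other_start : start index of the other entity in the pair, if any;
                  the right side of the window stops once it reaches it
    """
    start = max(0, i - left)                       # cut short at the sentence edge
    end = min(len(tokens), i + right + 1)
    if other_start is not None and other_start > i:
        end = min(end, other_start)                # stop at the second entity
    return tokens[start:end]
```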
To inform the system of other entity mentions, we also add special input tokens at the locations where events and timexes are tagged. The embeddings of the special tokens are uniformly initialized, and automatically tuned during the training process.

3.2.2 Timex Pairs

The method described in Meng et al. (2017) classifies timex pairs by handcrafted rules and then adds them to the final results prior to postprocessing. Since timexes have concrete time values, a rule-based method would seem appropriate. However, since our model uses global context to help classify relations and timex-timex pairs enrich the global context representation, we design a way for a common classifier model to handle such pairs. When DCT is not involved, timex pairs are created the same way as cross-sentence pairs, that is, the path to the root is used for each entity. DCT is represented by the placeholder word now. In addition to the word-based representations, another input vector is used to simulate the rule-based approach, explained next.

3.2.3 Time Value Vectors

Every timex tag has a time value, following the ISO-8601 standard. Every value can be mapped to a 2D vector of real values (start, end). For a pair, we use the subtraction of the vectors to represent the difference. Suppose we have the following timexes:

THE HAGUE, Netherlands (AP)_ The World Court <TIMEX3 tid="t21" type="DATE" value="1998-02-27" temporalFunction="true" functionInDocument="NONE" anchorTimeID="t0">Friday</TIMEX3> rejected U.S. and British objections to a Libyan World Court case that has the trial of two Libyans suspected of blowing up a Pan Am jumbo jet over Scotland in <TIMEX3 tid="t22" type="DATE" value="1988" temporalFunction="false" functionInDocument="NONE">1988</TIMEX3>.

The first timex can be represented as (1998 + 1/12 + 26/365, 1998 + 1/12 + 26/365) = (1998.155, 1998.155), and the second one as (1988, 1988 + 364/365) = (1988, 1988.997). The differences of the values are put into the sign function, to obtain the representation (sign(1988 - 1998.155), sign(1988.997 - 1998.155)) = (-1, -1). The vector (-1, -1) clearly indicates the AFTER relation between t21 and t22. We set the minimum interval to be a day, which is generally sufficient for our data. The DURATION timexes are not considered, and word-based input vectors are used to represent them. In order to make all the input data have the same shape, we assign the time value vector to all pairs, even if a timex is not involved. For non-timex pairs, the vector (-1, 0, 0) is used; the first element -1 indicates a "pseudo" time value. Real timex pairs have a first value of 1, so the example we just discussed would be assigned the vector (1, -1, -1). The time value vectors allow the model to take advantage of rule-based information.
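A small sketch of one way to build such time value vectors is shown below. It is our own reading of Sec. 3.2.3, handles only the YYYY and YYYY-MM-DD value formats that appear in the example, and is not the authors' implementation.

```python
def timex_to_interval(value):
    """Map an ISO-8601 DATE value to a (start, end) pair of year fractions."""
    parts = [int(p) for p in value.split("-")]
    if len(parts) == 1:                       # e.g. "1988"
        y = parts[0]
        return (y, y + 364 / 365)
    y, m, d = parts[:3]                       # e.g. "1998-02-27"
    t = y + (m - 1) / 12 + (d - 1) / 365
    return (t, t)

def time_value_vector(value1=None, value2=None):
    """3-dimensional time value vector; (-1, 0, 0) for non-timex pairs."""
    if value1 is None or value2 is None:
        return (-1, 0, 0)
    (s1, e1), (s2, e2) = timex_to_interval(value1), timex_to_interval(value2)
    sign = lambda x: (x > 0) - (x < 0)
    return (1, sign(s2 - s1), sign(e2 - e1))
```

Under these assumptions, `time_value_vector("1998-02-27", "1988")` reproduces the (1, -1, -1) encoding of the example above.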
3.3 Combining Two Components

We tried training the two components in a combined system, but found it slow to converge. In our experiments, we trained the pairwise model first, froze it, and then combined it with the GCL layer to train the GCL. This method also helps us observe whether the GCL component alone improves results, given the same input. We tried combining the systems in two ways. One is to connect the output layer of the pre-trained model to GCL, and the other is to slice the pre-trained model and connect its hidden layer to GCL. All the GCL layers are bi-directional, averaging forward and backward passes. By connecting the output layer, which has a softmax activation, we hand the final decisions made by the pairwise model to GCL. On the other hand, the hidden layer provides higher layers with cruder but richer information. We found that the latter performs better. It is also possible to train the two components together from scratch. In this case, the learning rate has to be set much lower to assure convergence, and the training requires more epochs.

4 Experiments

For all the experiments, hyperparameters including the number of epochs are tuned with the validation set only. Training data is segmented into chunks. Each chunk contains relation pairs in the narrative order. The size of the chunks is randomly chosen from [40, 60, 80, 120, 160] at the beginning of each epoch of training. The GCL maintains a memory for each chunk, and clears it at the end of a chunk. The idea here is to train the model on short paragraphs to avoid overfitting. To introduce further randomness, the chunks are rotated for each epoch. For a specific training file, if chunk i starts with pair $n_i$ in epoch 1, then in epoch 2 chunk i will start with pair $n_i + \mathrm{chunksize} + 11$; 11 is a prime number we chose to ensure that each epoch observes different compositions of chunks. By doing the rotation, some pairs in the final chunk of epoch 1 will show up in the first chunk of epoch 2 as well. However, within each chunk, we do not randomize pairs, so narrative order is preserved at this level. We also do not shuffle the chunks, but only rotate them. Evaluation on the test set uses only one chunk for each file (the chunk size is the number of pairs). Each relation pair is only processed once, without "multiple rounds of reading". Thus, we essentially train the model to read shorter paragraphs (varied in length), but test it on long articles.

4.1 Pairwise Model

As described in Section 3.2, the pairwise classifier has the following input vectors: left and right shortest path branches, two flat context vectors, a pair type flag, and a time value vector. Word embeddings are initialized with glove.840B.300d word vectors (https://nlp.stanford.edu/projects/glove/) and set to be trainable. The Bi-LSTM layers are followed by max-pooling. The two hidden layers have size 512 and 128, respectively. We train this model for 40 epochs, using the RMSProp optimizer (Tieleman and Hinton, 2012). The learning rate is scheduled as $lr = 2 \times 10^{-3} \times 2^{-n/5}$, where n is the number of epochs. The middle block of Table 1 shows the performance of the pairwise model after applying double-checking. Since all pairs are flipped, double-checking combines the results from (ei, ej) and (ej, ei), picking the label with the higher probability score, which typically boosts performance. The results without double-checking show similar trends.

Model                               Micro-F1   Macro-F1
CAEVO (not NN model)                .507
CATENA (not NN model)               .511
Cheng et al. 2017                   .520*
Meng et al. 2017                    .519
pairwise                            .535       .528
Two more hidden layers              .539       .532
GCL w/ state-tracking controller    .545       .538
GCL w/ stateless controller         .546       .538
GCL w/ pre-trained output layer     .541       .536
Table 1: Results on the test set. The GCL models use the same hyperparameters, if possible. The two models on the top do not use neural networks. The results in the two lower blocks all use double-checking. "Two more hidden layers" means adding two dense layers on top of the pre-trained model without using GCL. The last row corresponds to connecting the output layer of a pre-trained model to GCL layers with a stateless controller. *The Cheng et al. (2017) result does not include timex-timex pairs, which is 3% of total test instances.
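To make the flipping and double-checking steps of Sec. 2 and Sec. 4.1 concrete, here is a minimal sketch of how a single label can be picked from the two directed predictions of a pair. It is our own illustration, not the released code, and the label strings merely mirror the TimeBank-Dense classes.

```python
# Label inverses used when a pair (e_i, e_j) is flipped to (e_j, e_i);
# SIMULTANEOUS and VAGUE are symmetric, so they map to themselves.
INVERSE = {"BEFORE": "AFTER", "AFTER": "BEFORE",
           "INCLUDES": "IS_INCLUDED", "IS_INCLUDED": "INCLUDES",
           "SIMULTANEOUS": "SIMULTANEOUS", "VAGUE": "VAGUE"}

def double_check(probs_ij, probs_ji):
    """Combine predictions for (e_i, e_j) and (e_j, e_i) into one label.

    probs_ij, probs_ji : dicts mapping label -> probability for the
                         original and the flipped pair, respectively.
    """
    label_ij = max(probs_ij, key=probs_ij.get)
    label_ji = max(probs_ji, key=probs_ji.get)
    # keep the more confident direction; map the flipped label back
    if probs_ij[label_ij] >= probs_ji[label_ji]:
        return label_ij
    return INVERSE[label_ji]
```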
4.2 GCL Model

After training the pairwise model, we combine it with GCL. Unless otherwise indicated, the results reported in this section use the model configuration that connects the hidden layer (rather than the output layer) of the pairwise model with a bidirectional GCL layer. The bidirectional GCL is realized as the average of a forward GCL and a backward GCL, each producing a sequence. Then two more hidden layers are put on top of it, followed by an output layer. All the layers in the pre-trained pairwise model are set to be untrainable. The two trainable hidden layers have sizes 512 and 128, respectively, with ReLU activation and 0.3 dropout after each one. The GCLs have 128 memory slots. The learning rate is scheduled as $lr = 2 \times 10^{-4} \times 2^{-n/2}$.

In the experiments, we found the models converge quite fast with respect to the number of epochs. This is not surprising because the lower layers are already well trained and frozen (no updating). After the 5th epoch, the training accuracy typically reaches 0.95. We stop training after 10 epochs. The bottom block of Table 1 presents the results, showing that all models from the present paper outperform existing models from the literature.

One may argue that the combined system adds more hidden layers over a pre-trained model, which contributes to the improvement in performance. We show a comparison to a baseline model which adds two dense layers on top of the pairwise model, without the GCL. The configuration of the two layers is the same as we used for the GCL models. The result shows that the performance is slightly higher than what we get from the pairwise model, but the difference is smaller than what we get from the GCL models, suggesting that the performance improvement with GCL models is not just due to more parameters. We also tried adding an LSTM layer on top of the pre-trained model, and found that the system cannot converge. This again confirms that GCL is more powerful than LSTM in handling irregular time series. We found no difference in performance between the stateless controller and the state-tracking controller. Connecting the output layer of the pre-trained model to GCL seems to generate weaker results than connecting the hidden layer, although it also outperforms the pairwise model, and all previous models in the literature.

We performed significance testing to compare the pairwise model and the GCL-enabled model. A paired one-tailed t-test shows that the results from the GCL model are significantly higher than the results from the pairwise model (p-value 0.0015). While significant, the improvement is relatively small, we believe due in part to the small size of the TimeBank-Dense dataset.

4.3 Case Study

To illustrate the difference in performance of the pairwise model and the GCL model, we created a sample paragraph in which long-distance dependencies and references to DCT are needed to resolve some of the temporal relations:

John met Mary in Massachusetts when they attended the same university. They are getting married in 2019, 2 years after their graduation. But this year, they have relocated to New Hampshire.

We created the gold standard annotation for this text with 5 events, 2 timexes, and 24 TLINKs (see appendix; note that in TimeBank-Dense, no TLINKs are associated with DURATION timexes, so "2 years" is not annotated). We set the DCT to an arbitrary date "2018-04-01". There are no VAGUE or SIMULTANEOUS relations. For this paragraph, the pairwise model yields an accuracy (i.e. micro-averaged F1) of 0.292, while the GCL-enabled model yields 0.417. Overall, the GCL-enabled model assigns 6 VAGUE labels while the pairwise model assigns 11. This reflects the fact that GCL tries to infer relations from otherwise vague evidence.
4.3 Case Study

To illustrate the difference in performance between the pairwise model and the GCL model, we created a sample paragraph in which long-distance dependencies and references to the DCT are needed to resolve some of the temporal relations:

John met Mary in Massachusetts when they attended the same university. They are getting married in 2019, 2 years after their graduation. But this year, they have relocated to New Hampshire.

We created the gold standard annotation for this text with 5 events, 2 timexes, and 24 TLINKs (see appendix; note that in TimeBank-Dense, no TLINKs are associated with DURATION timexes, so "2 years" is not annotated). We set the DCT to an arbitrary date "2018-04-01". There are no VAGUE or SIMULTANEOUS relations. For this paragraph, the pairwise model yields an accuracy (i.e. micro-averaged F1) of 0.292, while the GCL-enabled model yields 0.417. Overall, the GCL-enabled model assigns 6 VAGUE labels while the pairwise model assigns 11. This reflects the fact that GCL tries to infer relations from otherwise vague evidence. For example, it is difficult to infer the relation between met and 2019 from the local context (without the DCT, particularly), so the pairwise model labels it as VAGUE, while the GCL-enabled model correctly assigns BEFORE.

Recall that the GCL is placed on top of a pre-trained pairwise model, so the mistakes made by the pairwise model propagate to the GCL. For example, the pairwise model incorrectly classifies 2019 as BEFORE graduation, perhaps due to a somewhat unusual syntax. The GCL-enabled system assigns it a VAGUE label, probably as a way to compromise. In the TimeBank-Dense test data, VAGUE cases dominate, which may have made it more difficult for GCL to assign proper labels. In the future, we believe it may be better to omit writing (and reading) the VAGUE relations to/from the GCL.

4.4 Error Analysis

Table 2 shows the overall performance for each relation using the GCL system with the stateless controller. Since we flip pairs and use double-checking to pick one result for each pair, BEFORE/AFTER and IS INCLUDED/INCLUDES are actually treated in the same way, respectively. Here we map the results back to the original pairs, in order to compare to other systems.

          Predicted labels
          SIMUL   BEF   AFT   IS INCL   INCL   VAG   Total
SIMUL        10     0     9         2      1    17      39
BEF           0   327    27        15      5   215     589
AFT           1    26   208         4      5   184     428
IS INCL       1    27     3        59      2    67     159
INCL          0    16     9         2     19    70     116
VAG           1   171    87        28     17   596     900

Table 2: Overall results per relation.

As the table shows, the VAGUE relation causes the most trouble. This is not only because VAGUE is the largest class, but also because it is often semantically ambiguous, so even human experts have low inter-annotator agreement. If we allow a relatively sparse labeling of the data and use other evaluation methods (e.g. question answering), the VAGUE class is not likely to have similar effects.

We also break down the results according to the types of pairs, as shown in Table 3. Compared to other systems, our approach has a big advantage on event-event (E-E) pairs, which are by far the most common pair type (64%) in the data and also require more complex natural language understanding.

Systems              E-D     E-E     E-T     Overall
Frequency            14%     64%     19%     97%
CAEVO               .553    .494    .494     .502
CATENA              .534    .519    .468     .512
Cheng et al. 2017   .546    .529    .471     .520
GCL                 .489    .570    .487     .542

Table 3: Results on the E-D, E-E and E-T pairs. GCL stands for the GCL-enabled system with a stateless controller. Frequencies are percentages in the test set. T-T pairs are not shown here. CAEVO is from Chambers et al. (2014). CATENA is from Mirza and Tonelli (2016).

Compared to CAEVO, our performance on event-DCT (E-D) and event-timex (E-T) pairs is not that great. CAEVO uses engineered features such as entity attributes, temporal signals, and semantic information from WordNet, which seems to work well in these two cases. We took a closer look at our E-D results and found that the relatively low performance is mainly caused by misclassifying VAGUE as AFTER. As Table 4 shows, among the 72 VAGUE relations in E-D pairs, 20 are labeled AFTER by our system.

          Predicted labels
          SIMUL   BEF   AFT   IS INCL   INCL   VAG
SIMUL         0     0     0         0      0     0
BEF           0    57    11        15      6    37
AFT           0     3    36         0      0    10
IS INCL       0    11     1        31      1    12
INCL          0     0     2         1      3     2
VAG           0     4    20         9     14    25

Table 4: Test results for event and document creation time (E-D) pairs. The rows are true labels and the columns are predicted labels.

In a news article, most events occur before the DCT, i.e. the time when the article was written.
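The per-relation scores reported above can be reproduced from confusion counts like those in Tables 2 and 4 with a short helper. The following is a generic sketch of the standard micro- and macro-averaged F1 definitions (with one predicted label per pair, micro-F1 equals accuracy); it is not the evaluation script used for the experiments.

```python
import numpy as np

def f1_from_confusion(cm):
    """Per-class F1 plus micro- and macro-averages from a confusion matrix
    whose rows are true labels and columns are predicted labels."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / np.maximum(cm.sum(axis=0), 1e-12)   # column sums = predicted counts
    recall = tp / np.maximum(cm.sum(axis=1), 1e-12)      # row sums = true counts
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    micro = tp.sum() / cm.sum()   # single-label setting: micro-F1 equals accuracy
    macro = f1.mean()
    return f1, micro, macro

# Counts copied from Table 4 (E-D pairs); label order: SIMUL, BEF, AFT, IS INCL, INCL, VAG.
table4 = [[0,  0,  0,  0,  0,  0],
          [0, 57, 11, 15,  6, 37],
          [0,  3, 36,  0,  0, 10],
          [0, 11,  1, 31,  1, 12],
          [0,  0,  2,  1,  3,  2],
          [0,  4, 20,  9, 14, 25]]
per_class_f1, micro_f1, macro_f1 = f1_from_confusion(table4)
```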
If the temporal relation is vague, our system tends to guess that the event occurs after the DCT. It is interesting because AFTER only accounts for 16% of all E-D pairs in test data (and about the same in training data), behind BEFORE (41%), VAGUE (21%), and IS INCLUDED (18%). However, E-D is a relatively small category with only 311 instances in the test set, so it is difficult to draw any a substantive conclusion in this case. Recall that our model has a uniform architecture for all input types and is trained on eventevent, event-timex and event-DCT pairs simultaneously. As a result, its performance is not optimal for some lower-frequency pair types. Tuning the model for each pair type separately, as well as resampling to deal with class imbalance would, perhaps, improve performance. However, the point of these experiments was not to get the largest improvement, but to show that the GCL mechanism can replace heuristic-based timegraph conflict resolution, improving the performance of an otherwise very similar model. 5 Related Work While the GCL model is inspired by NTM, other NTM variants have also been proposed recently. Zhang et al. (2015) proposed structured memory architectures for NTMs, and argue they could alleviate overfitting and increase predictive accuracy. Graves et al. (2016) proposed a memory access mechanism on top of NTM, which they call Differentiable Neural Computer (DNC). DNC can store the transitions between memory locations it accesses, and thus can model some structured data. G¨ulc¸ehre et al. (2016) proposed a Dynamic Neural Turing Machine (D-NTM) model, which allows discrete access to memory. G¨ulc¸ehre et al. (2017) further simplified the addressing algorithm, so a single trainable matrix is used to get locations for read and write. Both models separate the address section from the content section of memory, as do we. We came up with the idea independently, noting that the content-based addressing in the canonical NTM model is difficult to train. A crucial difference between GCL and these models is that they use input “content” to compute keys. In GCL, the addressing mechanism fully depends on the entity representations, which are provided by the context encoding layers and not computed by the GCL controller. Addressing then involves matching the input entities and the entities in memory. Other than NTM-based approaches, there are models that use an attention mechanism over either input or external memory. For instance, the Pointer Networks (Vinyals et al., 2015) uses attention over input timesteps. However, it has no power to rewrite information for later use, since they have no “memory” except for the RNN states. The Dynamic Memory Networks (Kumar et al., 2016) has an “episodic memory” module which can be updated at each timestep. However, the memory there is a vector (“episode”) without internal structure, and the attention mechanism works on inputs, just as in Pointer Networks. Our GCL model and other NTM-based models have a memory with multiple slots, and the addressing function (attention) dictates writing and reading to/from certain slots in the memory. 6 Conclusion We have proposed the first context-aware neural model for temporal information extraction using an external memory to represent global context. Our model introduces a Global Context Layer which is able to save and retrieve processed temporal relations, and then use this global context to infer new relations from new input. The memory can be updated, allowing self-correction. 
Experimental results show that the proposed model beats previous results without resorting to ad-hoc resolution of timegraph conflicts in postprocessing. Acknowledgments This project is funded in part by an NSF CAREER award to Anna Rumshisky (IIS-1652742). 536 References Nathanael Chambers, Taylor Cassidy, Bill McDowell, and Steven Bethard. 2014. Dense event ordering with a multi-pass architecture. Transactions of the Association for Computational Linguistics, 2:273– 284. Fei Cheng and Yusuke Miyao. 2017. Classifying temporal relations by bidirectional lstm over dependency paths. In ACL. Jason Alan Fries. 2016. Brundlefly at semeval-2016 task 12: Recurrent neural networks vs. joint inference for clinical temporal information extraction. CoRR, abs/1606.01433. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. CoRR, abs/1410.5401. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka GrabskaBarwinska, Sergio Gomez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adri`a Puigdom`enech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. 2016. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471– 476. C¸ aglar G¨ulc¸ehre, Sarath Chandar, and Yoshua Bengio. 2017. Memory augmented neural networks with wormhole connections. CoRR, abs/1701.08718. C¸ aglar G¨ulc¸ehre, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. 2016. Dynamic neural turing machine with soft and hard addressing schemes. CoRR, abs/1607.00036. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1378–1387, New York, New York, USA. PMLR. Chen Lin, Timothy A. Miller, Dmitriy Dligach, Steven Bethard, and Guergana Savova. 2017. Representations of time expressions for temporal relation extraction with convolutional neural networks. In BioNLP 2017, Vancouver, Canada, August 4, 2017, pages 322–327. Xiao Ling and Daniel S. Weld. 2010. Temporal information extraction. In Proceedings of the TwentyFourth AAAI Conference on Artificial Intelligence, AAAI 2010, Atlanta, Georgia, USA, July 11-15, 2010. Inderjeet Mani, Ben Wellner, Marc Verhagen, and James Pustejovsky. 2007. Three approaches to learning tlinks in timeml. Technical Report CS-07– 268, Computer Science Department. Yuanliang Meng, Anna Rumshisky, and Alexey Romanov. 2017. Temporal information extraction for question answering using syntactic dependencies in an lstm-based architecture. In Proc. of the conference on empirical methods in natural language processing (EMNLP). P Mirza and S Tonelli. 2016. Catena: Causal and temporal relation extraction from natural language texts. In The 26th International Conference on Computational Linguistics, pages 64–75. Association for Computational Linguistics. Paramita Mirza and Anne-Lyse Minard. 2015. Hlt-fbk: a complete temporal processing system for qa tempeval. In Proc. of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 801– 805. Association for Computational Linguistics. Weiyi Sun. 2014. 
Time Well Tell: Temporal Reasoning in Clinical Narratives. PhD dissertation. Department of Informatics, University at Albany, SUNY. Weiyi Sun, Anna Rumshisky, and Ozlem Uzuner. 2013. Evaluating temporal relations in clinical text: 2012 i2b2 challenge. Journal of the American Medical Informatics Association, 20(5):806–813. T Tieleman and G Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2):26–31. Julien Tourille, Olivier Ferret, Aurelie Neveol, and Xavier Tannier. 2017. Neural architecture for temporal relation extraction: A bi-lstm approach for detecting narrative containers. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 224–230, Vancouver, Canada. Association for Computational Linguistics. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2692–2700. Curran Associates, Inc. Katsumasa Yoshikawa, Sebastian Riedel, Masayuki Asahara, and Yuji Matsumoto. 2009. Jointly identifying temporal relations with markov logic. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1 - Volume 1, ACL ’09, pages 405– 413, Stroudsburg, PA, USA. Association for Computational Linguistics. Wei Zhang, Yang Yu, and Bowen Zhou. 2015. Structured memory for neural turing machines. CoRR, abs/1510.03931.
2018
49
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 46–55 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 46 Unsupervised Neural Machine Translation with Weight Sharing Zhen Yang1,2, Wei Chen1 , Feng Wang1,2∗, Bo Xu1 1Institute of Automation, Chinese Academy of Sciences 2University of Chinese Academy of Sciences {yangzhen2014, wei.chen.media, feng.wang, xubo}@ia.ac.cn Abstract Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, EnglishFrench and Chinese-to-English translation tasks. 1 Introduction Neural machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014), directly applying a single neural network to transform the source sentence into the target sentence, has now reached impressive performance (Shen et al., 2015; Wu et al., 2016; Johnson et al., 2016; Gehring et al., 2017; Vaswani et al., 2017). The NMT typically consists of two sub neural networks. The encoder network reads and encodes the source sentence into a 1Feng Wang is the corresponding author of this paper context vector, and the decoder network generates the target sentence iteratively based on the context vector. NMT can be studied in supervised and unsupervised learning settings. In the supervised setting, bilingual corpora is available for training the NMT model. In the unsupervised setting, we only have two independent monolingual corpora with one for each language and there is no bilingual training example to provide alignment information for the two languages. Due to lack of alignment information, the unsupervised NMT is considered more challenging. However, this task is very promising, since the monolingual corpora is usually easy to be collected. Motivated by recent success in unsupervised cross-lingual embeddings (Artetxe et al., 2016; Zhang et al., 2017b; Conneau et al., 2017), the models proposed for unsupervised NMT often assume that a pair of sentences from two different languages can be mapped to a same latent representation in a shared-latent space (Lample et al., 2017; Artetxe et al., 2017b). Following this assumption, Lample et al. (2017) use a single encoder and a single decoder for both the source and target languages. The encoder and decoder, acting as a standard auto-encoder (AE), are trained to reconstruct the inputs. And Artetxe et al. (2017b) utilize a shared encoder but two independent decoders. With some good performance, they share a glaring defect, i.e., only one encoder is shared by the source and target languages. 
Although the shared encoder is vital for mapping sentences from different languages into the shared-latent space, it is weak in keeping the uniqueness and internal characteristics of each language, such as the style, terminology and sentence structure. Since each language has its own characteristics, the source and target languages should be encoded and learned independently. Therefore, we conjecture that the shared encoder may be a factor limit47 ing the potential translation performance. In order to address this issue, we extend the encoder-shared model, i.e., the model with one shared encoder, by leveraging two independent encoders with each for one language. Similarly, two independent decoders are utilized. For each language, the encoder and its corresponding decoder perform an AE, where the encoder generates the latent representations from the perturbed input sentences and the decoder reconstructs the sentences from the latent representations. To map the latent representations from different languages to a shared-latent space, we propose the weightsharing constraint to the two AEs. Specifically, we share the weights of the last few layers of two encoders that are responsible for extracting highlevel representations of input sentences. Similarly, we share the weights of the first few layers of two decoders. To enforce the shared-latent space, the word embeddings are used as a reinforced encoding component in our encoders. For cross-language translation, we utilize the backtranslation following (Lample et al., 2017). Additionally, two different generative adversarial networks (GAN) (Yang et al., 2017), namely the local and global GAN, are proposed to further improve the cross-language translation. We utilize the local GAN to constrain the source and target latent representations to have the same distribution, whereby the encoder tries to fool a local discriminator which is simultaneously trained to distinguish the language of a given latent representation. We apply the global GAN to finetune the corresponding generator, i.e., the composition of the encoder and decoder of the other language, where a global discriminator is leveraged to guide the training of the generator by assessing how far the generated sentence is from the true data distribution 1. In summary, we mainly make the following contributions: • We propose the weight-sharing constraint to unsupervised NMT, enabling the model to utilize an independent encoder for each language. To enforce the shared-latent space, we also propose the embedding-reinforced encoders and two different GANs for our model. • We conduct extensive experiments on 1The code that we utilized to train and evaluate our models can be found at https://github.com/ZhenYangIACAS/unsupervised-NMT English-German, English-French and Chinese-to-English translation tasks. Experimental results show that the proposed approach consistently achieves great success. • Last but not least, we introduce the directional self-attention to model temporal order information for the proposed model. Experimental results reveal that it deserves more efforts for researchers to investigate the temporal order information within self-attention layers of NMT. 2 Related Work Several approaches have been proposed to train NMT models without direct parallel corpora. The scenario that has been widely investigated is one where two languages have little parallel data between them but are well connected by one pivot language. 
The most typical approach in this scenario is to independently translate from the source language to the pivot language and from the pivot language to the target language (Saha et al., 2016; Cheng et al., 2017). To improve the translation performance, Johnson et al. (2016) propose a multilingual extension of a standard NMT model and they achieve substantial improvement for language pairs without direct parallel training data. Recently, motivated by the success of crosslingual embeddings, researchers begin to show interests in exploring the more ambitious scenario where an NMT model is trained from monolingual corpora only. Lample et al. (2017) and Artetxe et al. (2017b) simultaneously propose an approach for this scenario, which is based on pre-trained cross lingual embeddings. Lample et al. (2017) utilizes a single encoder and a single decoder for both languages. The entire system is trained to reconstruct its perturbed input. For cross-lingual translation, they incorporate back-translation into the training procedure. Different from (Lample et al., 2017), Artetxe et al. (2017b) use two independent decoders with each for one language. The two works mentioned above both use a single shared encoder to guarantee the shared latent space. However, a concomitant defect is that the shared encoder is weak in keeping the uniqueness of each language. Our work also belongs to this more ambitious scenario, and to the best of our knowledge, we are one among the first endeavors to investigate how to train an NMT model with monolingual corpora only. 48 sx tx s Enc s Dec t Enc t Dec Enc Dec s s sx  Enc Dec s s c Dec sxs Enc Dec t s tx  Enc Dec t s c Dec tx Enc Dec t t tx  Enc Dec t t Dec tx Enc Dec s t sx  Enc Dec s t c Dec sx 1 g D l D Z 2 g D Figure 1: The architecture of the proposed model. We implement the shared-latent space assumption using a weight sharing constraint where the connection of the last few layers in Encs and Enct are tied (illustrated with dashed lines) and the connection of the first few layers in Decs and Dect are tied. ˜xEncs−Decs s and ˜xEnct−Dect t are self-reconstructed sentences in each language. ˜xEncs−Dect s is the translated sentence from source to target and ˜xEnct−Decs t is the translation in reversed direction. Dl is utilized to assess whether the hidden representation of the encoder is from the source or target language. Dg1 and Dg2 are used to evaluate whether the translated sentences are realistic for each language respectively. Z represents the shared-latent space. 3 The Approach 3.1 Model Architecture The model architecture, as illustrated in figure 1, is based on the AE and GAN. It consists of seven sub networks: including two encoders Encs and Enct, two decoders Decs and Dect, the local discriminator Dl, and the global discriminators Dg1 and Dg2. For the encoder and decoder, we follow the newly emerged Transformer (Vaswani et al., 2017). Specifically, the encoder is composed of a stack of four identical layers 2. Each layer consists of a multi-head self-attention and a simple position-wise fully connected feed-forward network. The decoder is also composed of four identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sublayer, which performs multi-head attention over the output of the encoder stack. For more details about the multi-head self-attention layer, we refer the reader to (Vaswani et al., 2017). 
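To make the encoder/decoder pairings in figure 1 concrete before describing the remaining components, the following minimal sketch composes the four reconstruction and translation paths. The TinyEncoder/TinyDecoder classes, layer contents, dimensions and vocabulary sizes are illustrative stand-ins and not the paper's Transformer implementation.

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self, vocab, dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.layer = nn.Linear(dim, dim)      # placeholder for the stack of 4 self-attention layers
    def forward(self, tokens):
        return torch.relu(self.layer(self.embed(tokens)))

class TinyDecoder(nn.Module):
    def __init__(self, vocab, dim=512):
        super().__init__()
        self.layer = nn.Linear(dim, dim)      # placeholder for the stack of 4 decoder layers
        self.proj = nn.Linear(dim, vocab)
    def forward(self, latent):
        return self.proj(torch.relu(self.layer(latent)))

enc_s, enc_t = TinyEncoder(32000), TinyEncoder(33000)   # source / target encoders
dec_s, dec_t = TinyDecoder(32000), TinyDecoder(33000)   # source / target decoders

x_s = torch.randint(0, 32000, (8, 20))   # a batch of source-language token ids

recon_s   = dec_s(enc_s(x_s))   # {Encs, Decs}: auto-encoding within the source language
trans_s2t = dec_t(enc_s(x_s))   # {Encs, Dect}: source-to-target translation
```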
We implement the local discriminator as a multi-layer perceptron and implement the global discriminator based on a convolutional neural network (CNN). The roles of the sub-networks, and how they can be interpreted, are summarised in Table 1. The proposed system has several striking components, which are critical either for the system to be trained in an unsupervised manner or for improving the translation performance.

2 The layer number is selected according to our preliminary experiment, which is presented in the appendix.

Networks               Roles
{Encs, Decs}           AE for the source language
{Enct, Dect}           AE for the target language
{Encs, Dect}           translation source → target
{Enct, Decs}           translation target → source
{Encs, Dl}             1st local GAN (GANl1)
{Enct, Dl}             2nd local GAN (GANl2)
{Enct, Decs, Dg1}      1st global GAN (GANg1)
{Encs, Dect, Dg2}      2nd global GAN (GANg2)

Table 1: Interpretation of the roles of the sub-networks in the proposed system.

Directional self-attention. Compared to recurrent neural networks, a disadvantage of the simple self-attention mechanism is that the temporal order information is lost. Although the Transformer applies positional encoding to the sequence before it is processed by the self-attention, how to model temporal order information within an attention layer is still an open question. Following (Shen et al., 2017), we build the encoders in our model on directional self-attention, which utilizes positional masks to encode temporal order information into the attention output. More concretely, two positional masks, namely the forward mask M^f and the backward mask M^b, are calculated as:

M^f_{ij} = 0 if i < j, and −∞ otherwise    (1)
M^b_{ij} = 0 if i > j, and −∞ otherwise    (2)

With the forward mask M^f, a later token only makes attention connections to the earlier tokens in the sequence, and vice versa with the backward mask. Similar to (Zhou et al., 2016; Wang et al., 2017), we utilize a self-attention network to process the input sequence in the forward direction. The output of this layer is taken by an upper self-attention network as input and processed in the reverse direction.

Weight sharing. Based on the shared-latent space assumption, we apply the weight-sharing constraint to relate the two AEs. Specifically, we share the weights of the last few layers of Encs and Enct, which are responsible for extracting high-level representations of the input sentences. Similarly, we also share the first few layers of Decs and Dect, which are expected to decode the high-level representations that are vital for reconstructing the input sentences. Compared to (Cheng et al., 2016; Saha et al., 2016), which use a fully shared encoder, we only share partial weights for the encoders and decoders. In the proposed model, the independent weights of the two encoders are expected to learn and encode hidden features about the internal characteristics of each language, such as the terminology, style, and sentence structure. The shared weights are utilized to map the hidden features extracted by the independent weights to the shared-latent space.
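A minimal PyTorch sketch of the weight-sharing constraint just described: handing the same layer object to both encoders ties the parameters of their last layer, while the lower layers stay language-specific. The layer internals here are simple placeholders rather than the paper's Transformer blocks, and the three-private/one-shared split is only an assumption for illustration.

```python
import torch
import torch.nn as nn

def make_layer(dim=512):
    # placeholder for one Transformer layer (self-attention + feed-forward)
    return nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

class SharedTopEncoder(nn.Module):
    def __init__(self, shared_top, n_private=3, dim=512):
        super().__init__()
        # language-specific lower layers
        self.private = nn.Sequential(*[make_layer(dim) for _ in range(n_private)])
        # the same module object is handed to both encoders, so its weights are tied
        self.shared = shared_top
    def forward(self, x):
        return self.shared(self.private(x))

shared_enc_top = make_layer()              # last encoder layer, shared across languages
enc_s = SharedTopEncoder(shared_enc_top)   # source-language encoder (Encs)
enc_t = SharedTopEncoder(shared_enc_top)   # target-language encoder (Enct)

x = torch.randn(8, 20, 512)                # a batch of already-embedded source sentences
z = enc_s(x)                               # latent representation in the shared space
```

The decoders are tied analogously, except that it is their first few layers that are shared rather than their last.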
Embedding-reinforced encoder. We use pre-trained cross-lingual embeddings in the encoders and keep them fixed during training. The fixed embeddings are used as a reinforced encoding component in our encoder. Formally, given the input sequence embedding vectors E = {e_1, ..., e_t} and the initial output sequence of the encoder stack H = {h_1, ..., h_t}, we compute H_r as:

H_r = g ⊙ H + (1 − g) ⊙ E    (3)

where H_r is the final output sequence of the encoder, which will be attended to by the decoder (in the standard Transformer, H is the final output of the encoder), and g is a gate unit computed as:

g = σ(W_1 E + W_2 H + b)    (4)

where W_1, W_2 and b are trainable parameters shared by the two encoders. The motivation behind this is twofold. Firstly, taking the fixed cross-lingual embeddings as the other encoding component is helpful for reinforcing the shared-latent space. Additionally, from the point of view of multi-channel encoders (Xiong et al., 2017), providing encoding components with different levels of composition enables the decoder to take pieces of the source sentence at varying composition levels suiting its own linguistic structure.

3.2 Unsupervised Training

Based on the architecture proposed above, we train the NMT model with the monolingual corpora only, using the following four strategies:

Denoising auto-encoding. Firstly, we train the two AEs to reconstruct their inputs respectively. In this form, each encoder should learn to compose the embeddings of its corresponding language and each decoder is expected to learn to decompose this representation into its corresponding language. Nevertheless, without any constraint, the AE quickly learns to merely copy every word one by one, without capturing any internal structure of the language involved. To address this problem, we utilize the same strategy as denoising AEs (Vincent et al., 2008) and add some noise to the input sentences (Hill et al., 2016; Artetxe et al., 2017b). To this end, we shuffle the input sentences randomly. Specifically, we apply a random permutation ε to the input sentence, verifying the condition:

|ε(i) − i| ≤ min(k(⌊steps/s⌋ + 1), n),  ∀i ∈ {1, ..., n}    (5)

where n is the length of the input sentence, steps is the number of global steps for which the model has been updated, and k and s are tunable parameters which can be set by users beforehand. This way, the system needs to learn some useful structure of the involved languages to be able to recover the correct word order. In practice, we set k = 2 and s = 100000.

Back-translation. In spite of denoising auto-encoding, the training procedure still involves a single language at a time, without considering our final goal of mapping an input sentence from the source/target language to the target/source language. For cross-language training, we utilize the back-translation approach in our unsupervised training procedure. Back-translation has shown its great effectiveness on improving NMT
The local discriminator, implemented as a multilayer perceptron with two hidden layers of size 256, takes the output of the encoder, i.e., Hr calculated as equation 3, as input, and produces a binary prediction about the language of the input sentence. The local discriminator is trained to predict the language by minimizing the following crossentropy loss: LDl(θDl) = −Ex∈xs[log p(f = s|Encs(x))] −Ex∈xt[log p(f = t|Enct(x))] (6) where θDl represents the parameters of the local discriminator and f ∈{s, t}. The encoders are trained to fool the local discriminator: LEncs(θEncs) = −Ex∈xs[log p(f = t|Encs(x))] (7) LEnct(θEnct) = −Ex∈xt[log p(f = s|Enct(x))] (8) where θEncs and θEnct are the parameters of the two encoders. Global GAN We apply the global GANs to fine tune the whole model so that the model is able to generate sentences undistinguishable from the true data, i.e., sentences in the training corpus. Different from the local GANs which updates the parameters of the encoders locally, the global GANs are 3Since the quality of the translation shows little effect on the performance of the model (Sennrich et al., 2015a), we simply use greedy decoding for speed. utilized to update the whole parameters of the proposed model, including the parameters of encoders and decoders. The proposed model has two global GANs: GANg1 and GANg2. In GANg1, the Enct and Decs act as the generator, which generates the sentence ˜xt 4 from xt. The Dg1, implemented based on CNN, assesses whether the generated sentence ˜xt is the true target-language sentence or the generated sentence. The global discriminator aims to distinguish among the true sentences and generated sentences, and it is trained to minimize its classification error rate. During training, the Dg1 feeds back its assessment to finetune the encoder Enct and decoder Decs. Since the machine translation is a sequence generation problem, following (Yang et al., 2017), we leverage policy gradient reinforcement training to back-propagate the assessment. We apply a similar processing to GANg2 (The details about the architecture of the global discriminator and the training procedure of the global GANs can be seen in appendix ?? and ??). There are two stages in the proposed unsupervised training. In the first stage, we train the proposed model with denoising auto-encoding, backtranslation and the local GANs, until no improvement is achieved on the development set. Specifically, we perform one batch of denoising autoencoding for the source and target languages, one batch of back-translation for the two languages, and another batch of local GAN for the two languages. In the second stage, we fine tune the proposed model with the global GANs. 4 Experiments and Results We evaluate the proposed approach on EnglishGerman, English-French and Chinese-to-English translation tasks 5. We firstly describe the datasets, pre-processing and model hyper-parameters we used, then we introduce the baseline systems, and finally we present our experimental results. 4.1 Data Sets and Preprocessing In English-German and English-French translation, we make our experiments comparable with previous work by using the datasets from the 4The ˜xt is ˜xEnct−Decs t in figure 1. We omit the superscript for simplicity. 5The reason that we do not conduct experiments on English-to-Chinese translation is that we do not get public test sets for English-to-Chinese. 51 WMT 2014 and WMT 2016 shared tasks respectively. 
For Chinese-to-English translation, we use the datasets from LDC, which has been widely utilized by previous works (Tu et al., 2017; Zhang et al., 2017a). WMT14 English-French Similar to (Lample et al., 2017), we use the full training set of 36M sentence pairs and we lower-case them and remove sentences longer than 50 words, resulting in a parallel corpus of about 30M pairs of sentences. To guarantee no exact correspondence between the source and target monolingual sets, we build monolingual corpora by selecting English sentences from 15M random pairs, and selecting the French sentences from the complementary set. Sentences are encoded with byte-pair encoding (Sennrich et al., 2015b), which has an English vocabulary of about 32000 tokens, and French vocabulary of about 33000 tokens. We report results on newstest2014. WMT16 English-German We follow the same procedure mentioned above to create monolingual training corpora for English-German translation, and we get two monolingual training data of 1.8M sentences each. The two languages share a vocabulary of about 32000 tokens. We report results on newstest2016. LDC Chinese-English For Chinese-to-English translation, our training data consists of 1.6M sentence pairs randomly extracted from LDC corpora 6. Since the data set is not big enough, we just build the monolingual data set by randomly shuffling the Chinese and English sentences respectively. In spite of the fact that some correspondence between examples in these two monolingual sets may exist, we never utilize this alignment information in our training procedure (see Section 3.2). Both the Chinese and English sentences are encoded with byte-pair encoding. We get an English vocabulary of about 34000 tokens, and Chinese vocabulary of about 38000 tokens. The results are reported on NIST02. Since the proposed system relies on the pretrained cross-lingual embeddings, we utilize the monolingual corpora described above to train the embeddings for each language independently by using word2vec (Mikolov et al., 2013). We then apply the public implementation 7 of the method proposed by (Artetxe et al., 2017a) to map these 6LDC2002L27, LDC2002T01, LDC2002E18, LDC2003E07, LDC2004T08, LDC2004E12, LDC2005T10 7https://github.com/artetxem/vecmap embeddings to a shared-latent space 8. 4.2 Model Hyper-parameters and Evaluation Following the base model in (Vaswani et al., 2017), we set the dimension of word embedding as 512, dropout rate as 0.1 and the head number as 8. We use beam search with a beam size of 4 and length penalty α = 0.6. The model is implemented in TensorFlow (Abadi et al., 2015) and trained on up to four K80 GPUs synchronously in a multi-GPU setup on a single machine. For model selection, we stop training when the model achieves no improvement for the tenth evaluation on the development set, which is comprised of 3000 source and target sentences extracted randomly from the monolingual training corpora. Following (Lample et al., 2017), we translate the source sentences to the target language, and then translate the resulting sentences back to the source language. The quality of the model is then evaluated by computing the BLEU score over the original inputs and their reconstructions via this two-step translation process. The performance is finally averaged over two directions, i.e., from source to target and from target to source. BLEU (Papineni et al., 2002) is utilized as the evaluation metric. 
For Chinese-to-English, we apply the script mteval-v11b.pl to evaluate the translation performance. For English-German and English-French, we evaluate the translation performance with the script multi-belu.pl 9. 4.3 Baseline Systems Word-by-word translation (WBW) The first baseline we consider is a system that performs word-by-word translations using the inferred bilingual dictionary. Specifically, it translates a sentence word-by-word, replacing each word with its nearest neighbor in the other language. Lample et al. (2017) The second baseline is a previous work that uses the same training and testing sets with this paper. Their model belongs to the standard attention-based encoder-decoder framework, which implements the encoder using a bidirectional long short term memory network (LSTM) and implements the decoder using a simple forward LSTM. They apply one single encoder and 8The configuration we used to run these open-source toolkits can be found in appendix ?? 9https://github.com/mosessmt/mosesdecoder/blob/617e8c8/scripts/generic/multibleu.perl;mteval-v11b.pl 52 en-de de-en en-fr fr-en zh-en Supervised 24.07 26.99 30.50 30.21 40.02 Word-by-word 5.85 9.34 3.60 6.80 5.09 Lample et al. (2017) 9.64 13.33 15.05 14.31 The proposed approach 10.86 14.62 16.97 15.58 14.52 Table 2: The translation performance on English-German, English-French and Chinese-to-English test sets. The results of (Lample et al., 2017) are copied directly from their paper. We do not present the results of (Artetxe et al., 2017b) since we use different training sets. decoder for the source and target languages. Supervised training We finally consider exactly the same model as ours, but trained using the standard cross-entropy loss on the original parallel sentences. This model can be viewed as an upper bound for the proposed unsupervised model. 4.4 Results and Analysis 4.4.1 Number of weight-sharing layers We firstly investigate how the number of weightsharing layers affects the translation performance. In this experiment, we vary the number of weightsharing layers in the AEs from 0 to 4. Sharing one layer in AEs means sharing one layer for the encoders and in the meanwhile, sharing one layer for the decoders. The BLEU scores of English-to-German, English-to-French and Chinese-to-English translation tasks are reported in figure 2. Each curve corresponds to a different translation task and the x-axis denotes the number of weight-sharing layers for the AEs. We find that the number of weight-sharing layers shows much effect on the translation performance. And the best translation performance is achieved when only one layer is shared in our system. When all of the four layers are shared, i.e., only one shared encoder is utilized, we get poor translation performance in all of the three translation tasks. This verifies our conjecture that the shared encoder is detrimental to the performance of unsupervised NMT especially for the translation tasks on distant language pairs. More concretely, for the related language pair translation, i.e., English-toFrench, the encoder-shared model achieves -0.53 BLEU points decline than the best model where only one layer is shared. For the more distant language pair English-to-German, the encoder-shared model achieves more significant decline, i.e., -0.85 BLEU points decline. And for the most distant language pair Chinese-to-English, the decline is as large as -1.66 BLEU points. We explain this as that the more distant the language pair is, the more different characteristics they have. 
And the shared encoder is weak in keeping the unique characteristics of each language. Additionally, we also notice that using two completely independent encoders, i.e., setting the number of weight-sharing layers to 0, results in poor translation performance too. This confirms our intuition that the shared layers are vital for mapping the source and target latent representations to a shared-latent space. In the rest of our experiments, we set the number of weight-sharing layers to 1.

Figure 2: The effects of the weight-sharing layer number on the English-to-German, English-to-French and Chinese-to-English translation tasks.

4.4.2 Translation results

Table 2 shows the BLEU scores on the English-German, English-French and Chinese-to-English test sets. As can be seen, the proposed approach obtains significant improvements over the word-by-word baseline system, with at least +5.01 BLEU points in English-to-German translation and up to +13.37 BLEU points in English-to-French translation. This shows that the proposed model, trained with monolingual data only, effectively learns to use the context information and the internal structure of each language.

                                       en-de   de-en   en-fr   fr-en   zh-en
Without weight sharing                 10.23   13.84   16.02   14.82   13.75
Without embedding-reinforced encoder   10.45   14.17   16.55   15.27   14.10
Without directional self-attention     10.60   14.21   16.82   15.30   14.29
Without local GANs                     10.51   14.35   16.40   15.07   14.12
Without global GANs                    10.34   14.05   16.19   15.21   14.09
Full model                             10.86   14.62   16.97   15.58   14.52

Table 3: Ablation study on the English-German, English-French and Chinese-to-English translation tasks. "Without weight sharing" means no layers are shared in the two AEs.

Compared to the work of (Lample et al., 2017), our model also achieves up to +1.92 BLEU points improvement on the English-to-French translation task. We believe that unsupervised NMT is very promising. However, there is still large room for improvement compared to the supervised upper bound. The gap between the supervised and unsupervised models is as large as 12.3-25.5 BLEU points, depending on the language pair and translation direction.

4.4.3 Ablation study

To understand the importance of the different components of the proposed system, we perform an ablation study by training multiple versions of our model with some components missing: the local GANs, the global GANs, the directional self-attention, the weight sharing, the embedding-reinforced encoders, etc. Results are reported in Table 3. We do not test the importance of the auto-encoding, back-translation and the pre-trained embeddings because they have been widely tested in (Lample et al., 2017; Artetxe et al., 2017b). Table 3 shows that the best performance is obtained with the simultaneous use of all the tested elements. The most critical component is the weight-sharing constraint, which is vital for mapping sentences of different languages to the shared-latent space. The embedding-reinforced encoder also brings some improvement on all of the translation tasks. When we remove the directional self-attention, we observe a decline of up to 0.3 BLEU points. This indicates that it deserves more effort to investigate the temporal order information in the self-attention mechanism. The GANs also significantly improve the translation performance of our system.
Specifically, the global GANs achieve improvement up to +0.78 BLEU points on Englishto-French translation and the local GANs also obtain improvement up to +0.57 BLEU points on English-to-French translation. This reveals that the proposed model benefits a lot from the crossdomain loss defined by GANs. 5 Conclusion and Future work The models proposed recently for unsupervised NMT use a single encoder to map sentences from different languages to a shared-latent space. We conjecture that the shared encoder is problematic for keeping the unique and inherent characteristic of each language. In this paper, we propose the weight-sharing constraint in unsupervised NMT to address this issue. To enhance the cross-language translation performance, we also propose the embedding-reinforced encoders, local GAN and global GAN into the proposed system. Additionally, the directional self-attention is introduced to model the temporal order information for our system. We test the proposed model on EnglishGerman, English-French and Chinese-to-English translation tasks. The experimental results reveal that our approach achieves significant improvement and verify our conjecture that the shared encoder is really a bottleneck for improving the unsupervised NMT. The ablation study shows that each component of our system achieves some improvement for the final translation performance. Unsupervised NMT opens exciting opportunities for the future research. However, there is still a large room for improvement compared to the supervised NMT. In the future, we would like to investigate how to utilize the monolingual data more effectively, such as incorporating the language model and syntactic information into unsupervised NMT. Besides, we decide to make more efforts to explore how to reinforce the temporal or54 der information for the proposed model. Acknowledgements This work is supported by the National Key Research and Development Program of China under Grant No. 2017YFB1002102, and Beijing Engineering Research Center under Grant No. Z171100002217015. We would like to thank Xu Shuang for her preparing data used in this work. Additionally, we also want to thank Jiaming Xu, Suncong Zheng and Wenfu Wang for their invaluable discussions on this work. References Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems . Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Conference on Empirical Methods in Natural Language Processing. pages 2289–2294. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017a. Learning bilingual word embeddings with (almost) no bilingual data. In Meeting of the Association for Computational Linguistics. pages 451– 462. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2017b. Unsupervised neural machine translation . Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. 
Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Yong Cheng, Yang Liu, Qian Yang, Maosong Sun, and Wei Xu. 2016. Neural machine translation with pivot languages. arXiv preprint arXiv:1611.04928 . Yong Cheng, Qian Yang, Yang Liu, Maosong Sun, Wei Xu, Yong Cheng, Qian Yang, Yang Liu, Maosong Sun, and Wei Xu. 2017. Joint training for pivotbased neural machine translation. In Twenty-Sixth International Joint Conference on Artificial Intelligence. pages 3974–3980. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 . Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv Jgou. 2017. Word translation without parallel data . Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning . Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. TACL . Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, et al. 2016. Google’s multilingual neural machine translation system: Enabling zero-shot translation. arXiv preprint arXiv:1611.04558 . Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. EMNLP pages 1700–1709. Guillaume Lample, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only . Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. Association for Computational Linguistics pages 311–318. Amrita Saha, Mitesh M Khapra, Sarath Chandar, Janarthanan Rajendran, and Kyunghyun Cho. 2016. A correlational encoder decoder architecture for pivot based sequence generation. arXiv preprint arXiv:1606.04754 . Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015a. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709 . Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015b. Neural machine translation of rare words with subword units. Computer Science . Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2015. Minimum risk training for neural machine translation. arXiv preprint arXiv:1512.02433 . 55 Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2017. Disan: Directional self-attention network for rnn/cnn-free language understanding . Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. Advances in neural information processing systems pages 3104–3112. Zhaopeng Tu, Yang Liu, Shuming Shi, and Tong Zhang. 2017. Learning to remember translation history with a continuous cache . Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need . Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. 
Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning. ACM, pages 1096–1103. Mingxuan Wang, Zhengdong Lu, Jie Zhou, and Qun Liu. 2017. Deep neural machine translation with linear associative unit. arXiv preprint arXiv:1705.00861 . Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 . Hao Xiong, Zhongjun He, Xiaoguang Hu, and Hua Wu. 2017. Multi-channel encoder for neural machine translation. arXiv preprint arXiv:1712.02109 . Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2017. Improving neural machine translation with conditional sequence generative adversarial nets . Jiacheng Zhang, Yang Liu, Huanbo Luan, Jingfang Xu, and Maosong Sun. 2017a. Prior knowledge integration for neural machine translation using posterior regularization. In Meeting of the Association for Computational Linguistics. pages 1514–1523. Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Conference on Empirical Methods in Natural Language Processing. pages 1535–1545. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Adversarial training for unsupervised bilingual lexicon induction. In Meeting of the Association for Computational Linguistics. pages 1959– 1970. Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep recurrent models with fast-forward connections for neural machine translation. arXiv preprint arXiv:1606.04199 .
2018
5
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 537–547 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 537 Temporal Event Knowledge Acquisition via Identifying Narratives Wenlin Yao and Ruihong Huang Department of Computer Science and Engineering Texas A&M University {wenlinyao, huangrh}@tamu.edu Abstract Inspired by the double temporality characteristic of narrative texts, we propose a novel approach for acquiring rich temporal “before/after” event knowledge across sentences in narrative stories. The double temporality states that a narrative story often describes a sequence of events following the chronological order and therefore, the temporal order of events matches with their textual order. We explored narratology principles and built a weakly supervised approach that identifies 287k narrative paragraphs from three large text corpora. We then extracted rich temporal event knowledge from these narrative paragraphs. Such event knowledge is shown useful to improve temporal relation classification and outperform several recent neural network models on the narrative cloze task. 1 Introduction Occurrences of events, referring to changes and actions, show regularities. Specifically, certain events often co-occur and in a particular temporal order. For example, people often go to work after graduation with a degree. Such “before/after” temporal event knowledge can be used to recognize temporal relations between events in a document even when their local contexts do not indicate any temporal relations. Temporal event knowledge is also useful to predict an event given several other events in the context. Improving event temporal relation identification and event prediction capabilities can benefit various NLP applications, including event timeline generation, text summarization and question answering. While being in high demand, temporal event Michael Kennedy graduated with a bachelor’s degree from Harvard University in 1980. He married his wife, Victoria, in 1981 and attended law school at the University of Virginia. After receiving his law degree, he briefly worked for a private law firm before joining Citizens Energy Corp. He took over management of the corporation, a non-profit firm that delivered heating fuel to the poor, from his brother Joseph in 1988. Kennedy expanded the organization goals and increased fund raising. Beth paid the taxi driver. She jumped out of the taxi and headed towards the door of her small cottage. She reached into her purse for keys. Beth entered her cottage and got undressed. Beth quickly showered deciding a bath would take too long. She changed into a pair of jeans, a tee shirt, and a sweater. Then, she grabbed her bag and left the cottage. Figure 1: Two narrative examples knowledge is lacking and difficult to obtain. Existing knowledge bases, such as Freebase (Bollacker et al., 2008) or Probase (Wu et al., 2012), often contain rich knowledge about entities, e.g., the birthplace of a person, but contain little event knowledge. Several approaches have been proposed to acquire temporal event knowledge from a text corpus, by either utilizing textual patterns (Chklovski and Pantel, 2004) or building a temporal relation identifier (Yao et al., 2017). However, most of these approaches are limited to identifying temporal relations within one sentence. 
Inspired by the double temporality characteristic of narrative texts, we propose a novel approach for acquiring rich temporal “before/after” event knowledge across sentences via identifying narrative stories. The double temporality states that a narrative story often describes a sequence of events following the chronological order and therefore, the temporal order of events matches with their textual order (Walsh, 2001; Riedl and Young, 2010; Grabes, 2013). Therefore, we can easily distill temporal event knowledge if we have identified a large collection of 538 narrative texts. Consider the two narrative examples in figure 1, where the top one is from a news article of New York Times and the bottom one is from a novel book. From the top one, we can easily extract one chronologically ordered event sequence {graduated, marry, attend, receive, work, take over, expand, increase}, with all events related to the main character Michael Kennedy. While some parts of the event sequence are specific to this story, the event sequence contains regular event temporal relations, e.g., people often {graduate} first and then get {married}, or {take over} a role first and then {expand} a goal. Similarly, from the bottom one, we can easily extract another event sequence {pay, jump out, head, reach into, enter, undress, shower, change, grab, leave} that contains routine actions when people take a shower and change clothes. There has been recent research on narrative identification from blogs by building a text classifier in a supervised manner (Gordon and Swanson, 2009; Ceran et al., 2012). However, narrative texts are common in other genres as well, including news articles and novel books, where little annotated data is readily available. Therefore, in order to identify narrative texts from rich sources, we develop a weakly supervised method that can quickly adapt and identify narrative texts from different genres, by heavily exploring the principles that are used to characterize narrative structures in narratology studies. It is generally agreed in narratology (Forster, 1962; Mani, 2012; Pentland, 1999; Bal, 2009) that a narrative is a discourse presenting a sequence of events arranged in their time order (the plot) and involving specific characters (the characters). First, we derive specific grammatical and entity co-reference rules to identify narrative paragraphs that each contains a sequence of sentences sharing the same actantial syntax structure (i.e., NP VP describing a character did something) (Greimas, 1971) and mentioning the same character. Then, we train a classifier using the initially identified seed narrative texts and a collection of grammatical, co-reference and linguistic features that capture the two key principles and other textual devices of narratives. Next, the classifier is applied back to identify new narratives from raw texts. The newly identified narratives will be used to augment seed narratives and the bootstrapping learning process iterates until no enough new narratives can be found. Then by leveraging the double temporality characteristic of narrative paragraphs, we distill general temporal event knowledge. Specifically, we extract event pairs as well as longer event sequences consisting of strongly associated events that often appear in a particular textual order in narrative paragraphs, by calculating Causal Potential (Beamer and Girju, 2009; Hu et al., 2013) between events. 
Specifically, we obtained 19k event pairs and 25k event sequences with three to five events from the 287k narrative paragraphs we identified across three genres: news articles, novel books and blogs. Our evaluation shows that both the automatically identified narrative paragraphs and the extracted event knowledge are of high quality. Furthermore, the learned temporal event knowledge is shown to yield additional performance gains when used for temporal relation identification and the Narrative Cloze task. The acquired event temporal knowledge and the knowledge acquisition system are publicly available at http://nlp.cs.tamu.edu/resources.html.

2 Related Work

Several previous works have focused on acquiring temporal event knowledge from texts. VerbOcean (Chklovski and Pantel, 2004) used predefined lexico-syntactic patterns (e.g., "X and then Y") to acquire event pairs with the temporal happens-before relation from the Web. Yao et al. (2017) simultaneously trained a temporal "before/after" relation classifier and acquired event pairs that are regularly in a temporal relation, by exploring the observation that some event pairs tend to show the same temporal relation regardless of contexts. Note that these prior works are limited to identifying temporal relations within individual sentences. In contrast, our approach is designed to acquire temporal relations across sentences in a narrative paragraph. Interestingly, only 195 (1%) out of the 19k event pairs acquired by our approach can be found in VerbOcean or among the regular event pairs learned by the previous two approaches.

Our design of the overall event knowledge acquisition also benefits from recent progress on narrative identification. Gordon and Swanson (2009) annotated a small set of paragraphs presenting stories in the ICWSM Spinn3r Blog corpus (Burton et al., 2009) and trained a classifier using bag-of-words features to identify more stories. Ceran et al. (2012) trained a narrative classifier using semantic triplet features on the CSC Islamic Extremist corpus. Our weakly supervised narrative identification method is closely related to Eisenberg and Finlayson (2017), which also explored the two key elements of narratives, the plot and the characters, in designing features with the goal of obtaining a generalizable story detector. But different from this work, our narrative identification method does not require any human annotations and can quickly adapt to new text sources.

Temporal event knowledge acquisition is related to script learning (Chambers and Jurafsky, 2008), where a script consists of a sequence of events that are often temporally ordered and represent a typical scenario. However, most of the existing approaches on script learning (Chambers and Jurafsky, 2009; Pichotta and Mooney, 2016; Granroth-Wilding and Clark, 2016) were designed to identify clusters of closely related events, not to learn the temporal order between events. For example, Chambers and Jurafsky (2008, 2009) learned event scripts by first identifying closely related events that share an argument and then recognizing their partial temporal orders with a separate temporal relation classifier trained on the small labeled dataset TimeBank (Pustejovsky et al., 2003). Using the same method to get training data, Jans et al. (2012); Granroth-Wilding and Clark (2016); Pichotta and Mooney (2016); Wang et al. (2017) applied neural networks to learn event embeddings and predict the following event in a context.
In contrast to these previous script learning works, we focus on acquiring event pairs or longer script-like event sequences with events arranged in a complete temporal order. In addition, recent works (Regneri et al., 2010; Modi et al., 2016) collected script knowledge by directly asking Amazon Mechanical Turk (AMT) workers to write down typical temporally ordered event sequences for a given scenario (e.g., shopping or cooking). Interestingly, our evaluation shows that our approach can yield temporal event knowledge that covers 48% of such human-provided script knowledge.

3 Key Elements of Narratives

It is generally agreed in narratology (Forster, 1962; Mani, 2012; Pentland, 1999; Bal, 2009) that a narrative presents a sequence of events arranged in their time order (the plot) and involving specific characters (the characters).

Plot. The plot consists of a sequence of closely related events. According to Bal (2009), an event in a narrative often describes a "transition from one state to another state, caused or experienced by actors". Moreover, as Mani (2012) illustrates, a narrative is often "an account of past events in someone's life or in the development of something". These prior studies suggest that sentences containing a plot event are likely to have the actantial syntax "NP VP" (where NP is a noun phrase and VP a verb phrase; Greimas, 1971) with the main verb in the past tense.

Character. A narrative usually describes events caused or experienced by actors. Therefore, a narrative story often has one or two main characters, called protagonists, who are involved in multiple events and tie events together. The main character can be a person or an organization.

Other Textual Devices. A narrative may contain peripheral content other than events and characters, including time, place, and the emotional and psychological states of characters, which does not advance the plot but provides essential information for the interpretation of the events (Pentland, 1999). We use rich Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2015) features to capture a variety of textual devices used to describe such content.

4 Phase One: Weakly Supervised Narrative Identification

In order to acquire rich temporal event knowledge, we first develop a weakly supervised approach that can quickly adapt to identify narrative paragraphs from various text sources.

4.1 System Overview

The weakly supervised method is designed to capture the key elements of narratives in each of two stages. As shown in Figure 2, in the first stage, we identify an initial batch of narrative paragraphs that satisfy strict rules and the key principles of narratives. Then, in the second stage, we train a statistical classifier using the initially identified seed narrative texts and a collection of soft features that capture the same key principles and other textual devices of narratives. Next, the classifier is applied to identify new narratives from raw texts again. The newly identified narratives are used to augment the seed narratives, and the bootstrapping learning process iterates until not enough (specifically, fewer than 2,000) new narratives can be found. Here, in order to specialize the statistical classifier to each genre, we conduct the learning process on news, novels and blogs separately.

Figure 2: Overview of the Narrative Learning System.

4.2 Rules for Identifying Seed Narratives

Grammar Rules for Identifying Plot Events.
Guided by the prior narratology studies (Greimas, 1971; Mani, 2012) and our observations, we use context-free grammar production rules to identify sentences that describe an event in an actantial syntax structure. Specifically, we use three sets of grammar rules to specify the overall syntactic structure of a sentence. First, we require a sentence to have the basic active-voice structure "S → NP VP" or one of the more complex sentence structures derived from the basic structure by considering Coordinating Conjunction (CC), Adverbial Phrase (ADVP) or Prepositional Phrase (PP) attachments (we manually identified 14 top-level sentence production rules, for example "S → NP ADVP VP", "S → PP , NP VP" and "S → S CC S"; the Appendix shows all the rules). For example, in the narrative of Figure 1, the sentence "Michael Kennedy earned a bachelor's degree from Harvard University in 1980." has the basic sentence structure "S → NP VP", where the "NP" governs the character mention of 'Michael Kennedy' and the "VP" governs the rest of the sentence and describes a plot event. In addition, considering that a narrative is usually "an account of past events in someone's life or in the development of something" (Mani, 2012; Dictionary, 2007), we require the headword of the VP to be in the past tense. Furthermore, the subject of the sentence is meant to represent a character. Therefore, we specify 12 grammar rules (example NP rules include "NP → NNP", "NP → NP CC NP" and "NP → DT NNP") to require the sentence subject noun phrase to have a simple structure and to have a proper noun or pronoun as its head word.

For seed narratives, we consider paragraphs containing at least four sentences and require 60% or more of the sentences to satisfy the sentence structure specified above. We also require a narrative paragraph to contain no more than 20% of sentences that are interrogative, exclamatory or dialogue, which normally do not contain any plot events. The specific parameter settings are mainly determined based on our observations and analysis of narrative samples. The threshold of 60% for "sentences with actantial structure" was set to reflect the observation that sentences in a narrative paragraph usually (over half) have an actantial structure. A small portion (20%) of interrogative, exclamatory or dialogue sentences is allowed to reflect the observation that many paragraphs are overall narratives even though they may contain one or two such sentences, so that we achieve a good coverage in narrative identification.

The Character Rule. A narrative usually has a protagonist character who appears in multiple sentences and ties a sequence of events together; therefore, we also specify a rule requiring a narrative paragraph to have a protagonist character. Concretely, inspired by Eisenberg and Finlayson (2017), we applied the named entity recognizer (Finkel et al., 2005) and entity coreference resolver (Lee et al., 2013) from the CoreNLP toolkit (Manning et al., 2014) to identify the longest entity chain in a paragraph that has at least one mention recognized as a Person or Organization, or a gendered pronoun. Then we calculate the normalized length of this entity chain by dividing the number of entity mentions by the number of sentences in the paragraph. We require the normalized length of this longest entity chain to be ≥ 0.4, meaning that 40% or more of the sentences in a narrative mention a character. This value was chosen to reflect that a narrative paragraph often contains a main character who is commonly mentioned across sentences (in half or a bit less than half of all the sentences).
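To make the seeding stage concrete, here is a minimal sketch of the rule checks described above, assuming the paragraph has already been parsed and its coreference chains resolved; all function and field names are illustrative assumptions, not taken from the authors' implementation.

```python
# A minimal sketch of the seed-narrative rules (Section 4.2). `sentences` is a
# list of dicts with per-sentence flags computed from the parse trees, and
# `longest_chain_mentions` counts mentions of the longest Person/Organization
# coreference chain in the paragraph.

def is_seed_narrative(sentences, longest_chain_mentions,
                      min_sents=4, actantial_ratio=0.6,
                      non_plot_ratio=0.2, chain_ratio=0.4):
    n = len(sentences)
    if n < min_sents:
        return False
    actantial = sum(s["has_actantial_structure"] and s["past_tense_vp"]
                    and s["simple_subject_np"] for s in sentences)
    non_plot = sum(s["is_interrogative_exclamatory_or_dialogue"]
                   for s in sentences)
    return (actantial / n >= actantial_ratio              # >= 60% plot-event sentences
            and non_plot / n <= non_plot_ratio            # <= 20% non-plot sentences
            and longest_chain_mentions / n >= chain_ratio)  # protagonist in >= 40% of sentences
```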
4.3 The Statistical Classifier for Identifying New Narratives

Using the seed narrative paragraphs identified in the first stage as positive instances, we train a statistical classifier to continue to identify more narrative paragraphs that may not satisfy the specific rules. We also prepare negative instances to compete with the positive narrative paragraphs in training. Negative instances are paragraphs that are not likely to be narratives and do not present a plot or protagonist character, but are similar to seed narratives in other aspects. Specifically, similar to seed narratives, we require a non-narrative paragraph to contain at least four sentences with no more than 20% of the sentences being interrogative, exclamatory or dialogue; but in contrast to seed narratives, a non-narrative paragraph should contain 30% or fewer sentences that have the actantial sentence structure, and its longest character entity chain should not span over 20% of the sentences. We randomly sample five times as many such non-narrative paragraphs as narrative paragraphs; this skewed pos:neg ratio of 1:5 is used in all bootstrapping iterations to reflect the observation that there are generally many more non-narrative paragraphs than narrative paragraphs in a document.

In addition, since it is infeasible to apply the trained classifier to all the paragraphs in a large text corpus, such as the Gigaword corpus (Graff and Cieri, 2003), we identify candidate narrative paragraphs and only apply the statistical classifier to these candidate paragraphs. Specifically, we require a candidate paragraph to satisfy all the constraints used for identifying seed narrative paragraphs, but to contain only 30% or more sentences with an actantial structure and to have the longest character entity chain spanning over only 20% or more of the sentences; these two values are half of the corresponding thresholds used for identifying seed narrative paragraphs.

We choose Maximum Entropy (Berger et al., 1996) as the classifier. Specifically, we use the MaxEnt model implementation in the LIBLINEAR library (Fan et al., 2008; https://www.csie.ntu.edu.tw/~cjlin/liblinear/) with default parameter settings. Next, we describe the features used to capture the key elements of narratives.

Features for Identifying Plot Events: Realizing that grammar production rules are effective in identifying sentences that contain a plot event, we encode all the production rules as features in the statistical classifier. Specifically, for each narrative paragraph, we use the frequencies of all syntactic production rules as features. Note that the bottom-level syntactic production rules have the form POS tag → WORD and contain a lexical word, which makes these rules dependent on the specific contents of a paragraph. Therefore, we exclude these bottom-level production rules from the feature set in order to model generalizable narrative elements rather than the specific contents of a paragraph. In addition, to capture potential event sequence overlaps between new narratives and the already learned narratives, we build a verb bigram language model using verb sequences extracted from the learned narrative paragraphs and calculate the perplexity score (as a feature) of the verb sequence in a candidate narrative paragraph.
Specifically, we calculate the perplexity score of an event sequence normalized by the number of events,

$PP(e_1, \ldots, e_N) = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(e_i \mid e_{i-1})}}$,

where $N$ is the total number of events in a sequence and $e_i$ is an event word. We approximate $P(e_i \mid e_{i-1}) = \frac{C(e_{i-1}, e_i)}{C(e_{i-1})}$, where $C(e_{i-1})$ is the number of occurrences of $e_{i-1}$ and $C(e_{i-1}, e_i)$ is the number of co-occurrences of $e_{i-1}$ and $e_i$. $C(e_{i-1}, e_i)$ and $C(e_{i-1})$ are calculated based on all event sequences from known narrative paragraphs.

Features for the Protagonist Characters: We consider the three longest coreferent entity chains in a paragraph that have at least one mention recognized as a Person or Organization, or a gendered pronoun. Similar to the seed narrative identification stage, we obtain the normalized length of each entity chain by dividing the number of entity mentions by the number of sentences in the paragraph. In addition, we also observe that a protagonist character appears frequently in the surrounding paragraphs as well; therefore, we calculate the normalized length of each entity chain based on its presence in the target paragraph as well as in one preceding paragraph and one following paragraph. We use 6 normalized lengths as features: 3 from the target paragraph (specifically, the lengths of the longest, second longest and third longest entity chains) and 3 from the surrounding paragraphs.

Other Writing Style Features: We create a feature for each semantic category in the Linguistic Inquiry and Word Count (LIWC) dictionary (Pennebaker et al., 2015), whose value is the total number of occurrences of all words in that category. These LIWC features capture the presence of certain types of words, such as words denoting relativity (e.g., motion, time, space) and words referring to psychological processes (e.g., emotion and cognition). In addition, we encode Part-of-Speech (POS) tag frequencies as features as well, which have been shown effective in identifying text genres and writing styles.
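As a companion to the features above, the verb-bigram perplexity feature can be sketched as follows; the counting scheme and the floor used for unseen bigrams are assumptions, not the authors' exact implementation.

```python
import math

def bigram_counts(known_sequences):
    """Count verb unigrams and bigrams over event sequences from known narratives."""
    uni, bi = {}, {}
    for seq in known_sequences:
        for a, b in zip(seq, seq[1:]):
            uni[a] = uni.get(a, 0) + 1
            bi[(a, b)] = bi.get((a, b), 0) + 1
    return uni, bi

def perplexity(seq, uni, bi, eps=1e-6):
    """Normalized perplexity PP(e_1..e_N): the N-th root of the product of
    1 / P(e_i | e_{i-1}) over adjacent verb pairs, with a small probability
    floor for unseen bigrams."""
    n = len(seq)
    log_pp = 0.0
    for a, b in zip(seq, seq[1:]):
        p = bi.get((a, b), 0) / max(uni.get(a, 0), 1)
        log_pp += -math.log(max(p, eps))
    return math.exp(log_pp / n)
```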
4.4 Identifying Narrative Paragraphs from Three Text Corpora

Our weakly supervised system is based on the principles shared across all narratives, so it can be applied to different text sources for identifying narratives. We considered three types of texts: (1) News Articles. News articles contain narrative paragraphs that describe the background of an important figure or provide details for a significant event. We use the English Gigaword 5th edition (Graff and Cieri, 2003; Napoles et al., 2012), which contains 10 million news articles. (2) Novel Books. Novels contain rich narratives describing actions by characters. BookCorpus (Zhu et al., 2015) is a large collection of free novel books written by unpublished authors, which contains 11,038 books of 16 different sub-genres (e.g., Romance, Historical, Adventure, etc.). (3) Blogs. Vast publicly accessible blogs also contain narratives, because "personal life and experiences" is a primary topic of blog posts (Lenhart, 2006). We use the Blog Authorship Corpus (Schler et al., 2006), collected from the blogger.com website, which consists of 680k posts written by thousands of authors.

We applied the Stanford CoreNLP tools (Manning et al., 2014) to the three text corpora to obtain POS tags, parse trees, named entities, coreference chains, etc. In order to combat semantic drift (McIntosh and Curran, 2009) in bootstrapping learning, we set the initial selection confidence score produced by the statistical classifier at 0.5 and increase it by 0.05 after each iteration. The bootstrapping system runs for four iterations and learns 287k narrative paragraphs in total. Table 1 shows the number of narratives that were obtained in the seeding stage and in each bootstrapping iteration from each text corpus.

Table 1: Number of new narratives generated after each bootstrapping iteration
          0 (Seeds)   1      2     3     4     Total
  News    20k         40k    12k   5k    1k    78k
  Novels  75k         82k    24k   6k    2k    189k
  Blogs   6k          10k    3k    1k    -     20k
  Sum     101k        132k   39k   12k   3k    287k

5 Phase Two: Extract Event Temporal Knowledge from Narratives

The narratives we obtained from the first phase may describe specific stories and contain uncommon events or event transitions. Therefore, we apply Pointwise Mutual Information (PMI) based statistical metrics to measure the strengths of event temporal relations, in order to identify general knowledge that is not specific to any particular story. Our goal is to learn event pairs and longer event chains with events completely ordered in the temporal "before/after" relation.

First, by leveraging the double temporality characteristic of narratives, we only consider event pairs and longer event chains with 3-5 events that have occurred as a segment in at least one event sequence extracted from a narrative paragraph. Specifically, we extract the event sequence (the plot) from a narrative paragraph by finding the main event in each sentence and chaining the main events according to their textual order; we only consider main events that are in base verb forms or in the past tense, by requiring their POS tags to be VB, VBP, VBZ or VBD. Then we rank candidate event pairs based on two factors: how strongly associated two events are, and how commonly they appear in a particular temporal order. We adopt an existing metric, Causal Potential (CP), which has been applied to acquire causally related events (Beamer and Girju, 2009) and measures exactly these two aspects. Specifically, the CP score of an event pair is calculated using the following equation:

$cp(e_i, e_j) = pmi(e_i, e_j) + \log \frac{P(e_i \rightarrow e_j)}{P(e_j \rightarrow e_i)}$   (1)

where the first part refers to the Pointwise Mutual Information (PMI) between two events and the second part measures the relative ordering of the two events. $P(e_i \rightarrow e_j)$ refers to the probability that $e_i$ occurs before $e_j$ in a text, which is proportional to the raw frequency of the pair. PMI measures the association strength of two events; formally, $pmi(e_i, e_j) = \log \frac{P(e_i, e_j)}{P(e_i)P(e_j)}$, with $P(e_i) = \frac{C(e_i)}{\sum_x C(e_x)}$ and $P(e_i, e_j) = \frac{C(e_i, e_j)}{\sum_x \sum_y C(e_x, e_y)}$, where $x$ and $y$ range over all the events in the corpus, $C(e_i)$ is the number of occurrences of $e_i$, and $C(e_i, e_j)$ is the number of co-occurrences of $e_i$ and $e_j$.

While each candidate pair of events should have appeared consecutively as a segment in at least one narrative paragraph, when calculating the CP score we consider event co-occurrences even when the two events are not consecutive in a narrative paragraph but have one or two other events in between. Specifically, in the same way as Hu and Walker (2017), we calculate separate CP scores based on event co-occurrences with zero (consecutive), one or two events in between, and use the weighted average CP score for ranking an event pair; formally, $CP(e_i, e_j) = \sum_{d=1}^{3} \frac{cp_d(e_i, e_j)}{d}$.
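A hedged sketch of the pairwise Causal Potential ranking (Eq. 1) together with the weighted average over gap sizes d = 1..3 is given below; corpus handling, smoothing and the treatment of co-occurrence counts are assumptions rather than the authors' exact implementation.

```python
import math
from collections import Counter

def collect_counts(event_sequences, max_gap=3):
    """Event counts C(e) and directed counts C_d(a -> b) at each gap size d."""
    uni = Counter()
    ordered = {d: Counter() for d in range(1, max_gap + 1)}
    for seq in event_sequences:
        uni.update(seq)
        for d in range(1, max_gap + 1):
            for i in range(len(seq) - d):
                ordered[d][(seq[i], seq[i + d])] += 1
    return uni, ordered

def cp(a, b, uni, ordered_d, eps=1e-12):
    """cp_d(a, b) = pmi(a, b) + log P(a -> b) / P(b -> a) at one gap size d."""
    n_events = sum(uni.values())
    n_pairs = sum(ordered_d.values())
    c_ab = ordered_d[(a, b)] + ordered_d[(b, a)]   # undirected co-occurrence count
    pmi = math.log((c_ab / n_pairs + eps) /
                   ((uni[a] / n_events) * (uni[b] / n_events) + eps))
    ordering = math.log((ordered_d[(a, b)] + eps) / (ordered_d[(b, a)] + eps))
    return pmi + ordering

def weighted_cp(a, b, uni, ordered, max_gap=3):
    """CP(a, b) = sum_d cp_d(a, b) / d, used for ranking event pairs."""
    return sum(cp(a, b, uni, ordered[d]) / d for d in range(1, max_gap + 1))
```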
Then we rank longer event sequences based on the CP scores of the individual event pairs included in an event sequence. However, an event sequence of length $n$ is more than the $n-1$ event pairs formed by consecutive events. We prefer event sequences that are coherent overall, where events that are one or two positions apart are highly related as well. Therefore, we define the following metric to measure the quality of an event sequence:

$CP(e_1, e_2, \cdots, e_n) = \frac{\sum_{d=1}^{3} \sum_{j=1}^{n-d} \frac{CP(e_j, e_{j+d})}{d}}{n-1}$   (2)

6 Evaluation

6.1 Precision of Narrative Paragraphs

From all the learned narrative paragraphs, we randomly selected 150 texts, with 25 texts selected from the narratives learned in each of the two stages (i.e., seed narratives and bootstrapped narratives) using each of the three text corpora (i.e., news, novels, and blogs). Following the definition "A story is a narrative of events arranged in their time sequence" (Forster, 1962; Gordon and Swanson, 2009), two human adjudicators were asked to judge whether each text is a narrative or a non-narrative. In order to obtain high inter-annotator agreement, before the official annotations we trained the two annotators for several iterations. Note that the texts we used in training the annotators are different from the final texts we used for evaluation purposes. The overall kappa inter-agreement between the two annotators is 0.77.

Table 2 shows the precision of narratives learned in the two stages using the three corpora. We determined that a text is a correct narrative if both annotators labeled it as a narrative. We can see that, on average, the rule-based classifier achieves a precision of 88% on initializing seed narratives and the statistical classifier achieves a precision of 84% on bootstrapping new ones. Using narratology-based features enables the statistical classifier to extensively learn new narratives while maintaining a high precision.

Table 2: Precision of narratives based on human annotation
  Narratives   Seed   Bootstrapped
  News         0.84   0.72
  Novel        0.88   0.92
  Blogs        0.92   0.88
  AVG          0.88   0.84

6.2 Precision of Event Pairs and Chains

To evaluate the quality of the extracted event pairs and chains, we randomly sampled 20 pairs (2%) from every 1,000 event pairs up to the top 18,929 pairs with CP score ≥ 2.0 (380 pairs selected in total), and 10 chains (1%) from every 1,000 up to the top 25,000 event chains (250 chains selected in total); many event chains have a high CP score close to 5.0, so we decided not to use a cut-off CP score for event chains but simply chose to evaluate the top 25,000 event chains. The average CP scores for all event pairs and all event chains we considered are 2.9 and 5.1 respectively. Two human adjudicators were asked to judge whether or not the events are likely to occur in the temporal order shown. For event chains, we have one additional criterion requiring that the events form a coherent sequence overall. An event pair/chain is deemed correct if both annotators labeled it as correct.
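For reference, the chain-level score of Eq. (2), which produced the chain rankings evaluated here, can be sketched by reusing the weighted_cp function from the earlier pairwise sketch; this is a literal reading of Eq. (2), not the authors' code.

```python
def chain_cp(chain, uni, ordered, max_gap=3):
    """CP(e_1..e_n) = (sum_d sum_j CP(e_j, e_{j+d}) / d) / (n - 1), for n >= 2."""
    n = len(chain)
    total = 0.0
    for d in range(1, max_gap + 1):
        for j in range(n - d):
            total += weighted_cp(chain[j], chain[j + d], uni, ordered) / d
    return total / (n - 1)
```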
The two annotators achieved kappa inter-agreement scores of 0.71 and 0.66 on annotating event pairs and event chains respectively.

Coverage of acquired knowledge is often hard to evaluate because we do not have a complete knowledge base to compare to. Thus, we propose a pseudo recall metric to evaluate the coverage of the event knowledge we acquired. Regneri et al. (2010) collected Event Sequence Descriptions (ESDs) of several types of human activities (e.g., baking a cake, going to the theater, etc.) using crowdsourcing. Our pseudo recall score is calculated based on how many consecutive event pairs in these human-written scripts can be found among our top-ranked event pairs. Figure 3 illustrates the precision of top-ranked pairs based on human annotation and the pseudo recall score based on ESDs. We can see that about 75% of the top 19k event pairs are correct, and that they capture 48% of the human-written script knowledge in ESDs.

Figure 3: Top-ranked event pairs evaluation.

In addition, Table 4 shows the precision of top-ranked event chains with 3 to 5 events. Among the top 25k event chains, about 70% are correctly ordered with the temporal "after" relation. Table 3 shows several examples of event pairs and chains.

Table 3: Examples of event pairs and chains (with CP scores); → represents the before relation.
  pairs:  graduate → teach (5.7), meet → marry (5.3), pick up → carry (6.3), park → get out (7.3), turn around → face (6.5), dial → ring (6.3)
  chains: drive → park → get out (7.8); toss → fly → land (5.9); grow up → attend → graduate → marry (6.9); contact → call → invite → accept (4.2); knock → open → reach → pull out → hold (6.0)

Table 4: Precision of top-ranked event chains
  # of top chains   5k     10k    15k    20k    25k
  Precision         0.76   0.80   0.75   0.73   0.69

6.3 Improving Temporal Relation Classification by Incorporating Event Knowledge

To find out whether the learned temporal event knowledge can help to improve temporal relation classification performance, we conducted experiments on a benchmark dataset, the TimeBank corpus v1.2, which contains 2,308 event pairs annotated with 14 temporal relations (simultaneous, before, after, ibefore, iafter, begins, begun by, ends, ended by, includes, is included, during, during inv, and identity). To facilitate direct comparisons, we used the same state-of-the-art temporal relation classification system described in our previous work, Choubey and Huang (2017), and considered all 14 relations in classification. Choubey and Huang (2017) forms three sequences (i.e., word forms, POS tags, and dependency relations) of context words that align with the dependency path between two event mentions and uses three bidirectional LSTMs to get the embedding of each sequence. The final fully connected layer maps the concatenated embeddings of all sequences to the 14 fine-grained temporal relations. We applied the same model here, but if an event pair appears in our learned list of event pairs, we concatenated the CP score of the event pair as additional evidence in the final layer. To be consistent with Choubey and Huang (2017), we used the same train/test split, the same parameters for the neural network, and only considered intra-sentence event pairs.

Table 5 shows that by incorporating our learned event knowledge, the overall prediction accuracy was improved by 1.1%. Not surprisingly, out of the 14 temporal relations, the performance on the relation before was improved the most, by 4.9%.

Table 5: Results on the TimeBank corpus
  Models                      Acc. (%)
  Choubey and Huang (2017)    51.2
  + CP score                  52.3

Table 6: Results on the MCNC task
  Method                               Acc. (%)
  Chambers and Jurafsky (2008)         30.92
  Granroth-Wilding and Clark (2016)    43.28
  Pichotta and Mooney (2016)           43.17
  Wang et al. (2017)                   46.67
  Our Results                          48.83

6.4 Narrative Cloze

The Multiple Choice version of the Narrative Cloze task (MCNC), proposed by Granroth-Wilding and Clark (2016) and Wang et al. (2017), aims to evaluate understanding of a script by predicting the next event given several context events.
Presenting a chain of contextual events e1, e2, ..., en−1, the task is to select the next event from five event candidates, one of which is correct and the others are randomly sampled elsewhere in the corpus. Following the same settings of Wang et al. (2017) and Granroth-Wilding and Clark (2016), we adapted the dataset (test set) of Chambers and Jurafsky (2008) to the multiple choice setting. The dataset contains 69 documents and 349 multiple choice questions. We calculated a PMI score between a candidate event and each context event e1, e2, ..., en−1 based on event sequences extracted from our learned 287k narratives and we chose the event that have the highest sum score of all individual PMI scores. Since the prediction accuracy on 349 multiple choice questions depends on the random initialization of four negative candidate events, we ran the experiment 10 times and took the average accuracy as the final performance. Table 6 shows the comparisons of our results with the performance of several previous models, which were all trained with 1,500k event chains extracted from the NYT portion of the Gigaword corpus (Graff and Cieri, 2003). Each event chain consists of a sequence of verbs sharing an actor within a news article. Except Chambers and Jurafsky (2008), other recent models utilized more and more sophisticated neural language models. Granroth-Wilding and Clark (2016) proposed a two layer neural network model that learns embeddings of event predicates and their arguments for predicting the next event. Pichotta and Mooney (2016) introduced a LSTM-based language model for event prediction. Wang et al. (2017) used dynamic memory as attention in LSTM for prediction. It is encouraging that by using event knowledge extracted from automatically identified narratives, we achieved the best event prediction performance, which is 2.2% higher than the best neural network model. 7 Conclusions This paper presents a novel approach for leveraging the double temporality characteristic of narrative texts and acquiring temporal event knowledge across sentences in narrative paragraphs. We developed a weakly supervised system that explores narratology principles and identifies narrative texts from three text corpora of distinct genres. The temporal event knowledge distilled from narrative texts were shown useful to improve temporal relation classification and outperform several neural language models on the narrative cloze task. For the future work, we plan to expand event temporal knowledge acquisition by dealing with event sense disambiguation and event synonym identification (e.g., drag, pull and haul). 8 Acknowledgments We thank our anonymous reviewers for providing insightful review comments. References Mieke Bal. 2009. Narratology: Introduction to the theory of narrative. University of Toronto Press. Brandon Beamer and Roxana Girju. 2009. Using a bigram event model to predict causal potential. In CICLing. Springer, pages 430–441. Adam L Berger, Vincent J Della Pietra, and Stephen A Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational linguistics 22(1):39–71. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data. AcM, pages 1247–1250. Kevin Burton, Akshay Java, and Ian Soboroff. 2009. The icwsm 2009 spinn3r dataset. 
In Third Annual Conference on Weblogs and Social Media (ICWSM 2009). AAAI. Betul Ceran, Ravi Karad, Steven Corman, and Hasan Davulcu. 2012. A hybrid model and memory based story classifier. In Proceedings of the 3rd Workshop on Computational Models of Narrative. pages 58– 62. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2. Association for Computational Linguistics, pages 602– 610. Nathanael Chambers and Daniel Jurafsky. 2008. Unsupervised learning of narrative event chains. In ACL. volume 94305, pages 789–797. Timothy Chklovski and Patrick Pantel. 2004. Verbocean: Mining the web for fine-grained semantic verb 546 relations. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Prafulla Kumar Choubey and Ruihong Huang. 2017. A sequential model for classifying temporal relations between intra-sentence events. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 1796–1802. Oxford English Dictionary. 2007. Oxford english dictionary online. Joshua Eisenberg and Mark Finlayson. 2017. A simpler and more generalizable story detector using verb and character features. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 2698–2705. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. Journal of machine learning research 9(Aug):1871–1874. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd annual meeting on association for computational linguistics. Association for Computational Linguistics, pages 363– 370. Edward Morgan Forster. 1962. Aspects of the novel. 1927. Ed. Oliver Stallybrass . Andrew Gordon and Reid Swanson. 2009. Identifying personal stories in millions of weblog entries. In Third International Conference on Weblogs and Social Media, Data Challenge Workshop, San Jose, CA. volume 46. Hebert Grabes. 2013. Sequentiality. Handbook of Narratology 2:765–76. David Graff and C Cieri. 2003. English gigaword corpus. Linguistic Data Consortium . Mark Granroth-Wilding and Stephen Clark. 2016. What happens next? event prediction using a compositional neural network model. In AAAI. pages 2727–2733. Algirdas Julien Greimas. 1971. Narrative grammar: Units and levels. MLN 86(6):793–806. Zhichao Hu, Elahe Rahimtoroghi, Larissa Munishkina, Reid Swanson, and Marilyn A Walker. 2013. Unsupervised induction of contingent event pairs from film scenes. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. pages 369–379. Zhichao Hu and Marilyn Walker. 2017. Inferring narrative causality between event pairs in films. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue. pages 342–351. Bram Jans, Steven Bethard, Ivan Vuli´c, and Marie Francine Moens. 2012. Skip n-grams and ranking functions for predicting script events. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 336–344. Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 
2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Computational Linguistics 39(4):885–916. Amanda Lenhart. 2006. Bloggers: A portrait of the internet’s new storytellers. Pew Internet & American Life Project. Inderjeet Mani. 2012. Computational modeling of narrative. Synthesis Lectures on Human Language Technologies 5(3):1–142. Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In ACL (System Demonstrations). pages 55–60. Tara McIntosh and James R Curran. 2009. Reducing semantic drift with bagging and distributional similarity. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1. Association for Computational Linguistics, pages 396– 404. Ashutosh Modi, Tatjana Anikina, Simon Ostermann, and Manfred Pinkal. 2016. Inscript: Narrative texts annotated with script information. In LREC. pages 3485–3493. Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction. Association for Computational Linguistics, pages 95–100. James W Pennebaker, Ryan L Boyd, Kayla Jordan, and Kate Blackburn. 2015. The development and psychometric properties of liwc2015. Technical report. Brian T Pentland. 1999. Building process theory with narrative: From description to explanation. Academy of management Review 24(4):711–724. Karl Pichotta and Raymond J Mooney. 2016. Learning statistical scripts with lstm recurrent neural networks. In AAAI. pages 2800–2806. James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, et al. 2003. The timebank corpus. In Corpus linguistics. Lancaster, UK., volume 2003, page 40. 547 Michaela Regneri, Alexander Koller, and Manfred Pinkal. 2010. Learning script knowledge with web experiments. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 979–988. Mark O Riedl and Robert Michael Young. 2010. Narrative planning: Balancing plot and character. Journal of Artificial Intelligence Research 39:217–268. Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W Pennebaker. 2006. Effects of age and gender on blogging. In AAAI spring symposium: Computational approaches to analyzing weblogs. volume 6, pages 199–205. Richard Walsh. 2001. Fabula and fictionality in narrative theory. Style 35(4):592–606. Zhongqing Wang, Yue Zhang, and Ching-Yun Chang. 2017. Integrating order information and event relation for script event prediction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 57–67. Wentao Wu, Hongsong Li, Haixun Wang, and Kenny Q Zhu. 2012. Probase: A probabilistic taxonomy for text understanding. In Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data. ACM, pages 481–492. Wenlin Yao, Saipravallika Nettyam, and Ruihong Huang. 2017. A weakly supervised approach to train temporal relation classifiers and acquire regular event pairs simultaneously. In Proceedings of the 2017 Conference on Recent Advances in Natural Language Processing. pages 803–812. 
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE International Conference on Computer Vision, pages 19–27.

A Appendix

Here is the full list of grammar rules for identifying plot events in the seeding stage (Section 4.2).

Sentence rules (14):
S → S CC S
S → S PRN CC S
S → NP VP
S → NP ADVP VP
S → NP VP ADVP
S → CC NP VP
S → PP NP VP
S → NP PP VP
S → PP NP ADVP VP
S → ADVP S NP VP
S → ADVP NP VP
S → SBAR NP VP
S → SBAR ADVP NP VP
S → CC ADVP NP VP

Noun Phrase rules (12):
NP → PRP
NP → NNP
NP → NNS
NP → NNP NNP
NP → NNP CC NNP
NP → NP CC NP
NP → DT NN
NP → DT NNS
NP → DT NNP
NP → DT NNPS
NP → NP NNP
NP → NP NNP NNP
Text Deconvolution Saliency (TDS): a deep tool box for linguistic analysis
Laurent Vanni1, Melanie Ducoffe2, Damon Mayaffre1, Frederic Precioso2, Dominique Longrée3, Veeresh Elango2, Nazly Santos2, Juan Gonzalez2, Luis Galdo2, Carlos Aguilar1
1Université Côte d'Azur, CNRS, BCL, France
2Université Côte d'Azur, CNRS, I3S, France
3Univ. Liège, L.A.S.L.A, Belgium
{laurent.vanni, melanie.ducoffe}@unice.fr
(L. Vanni and M. Ducoffe contributed equally to this work and should be considered as co-first authors.)

Abstract

In this paper, we propose a new strategy, called Text Deconvolution Saliency (TDS), to visualize linguistic information detected by a CNN for text classification. We extend Deconvolution Networks to text in order to present a new perspective on text analysis to the linguistic community. We empirically demonstrate the efficiency of our Text Deconvolution Saliency on corpora from three different languages: English, French, and Latin. For every tested dataset, our Text Deconvolution Saliency automatically encodes complex linguistic patterns based on co-occurrences and possibly on grammatical and syntactic analysis.

1 Introduction

As in many other fields of data analysis, Natural Language Processing (NLP) has been strongly impacted by the recent advances in Machine Learning, more particularly with the emergence of Deep Learning techniques. These techniques outperform all other state-of-the-art approaches on a wide range of NLP tasks, and so they have been quickly and intensively adopted in industrial systems. Such systems rely on end-to-end training on large amounts of data, making no prior assumptions about linguistic structure and focusing on statistically frequent patterns. Thus, they somehow step away from computational linguistics, as they learn implicit linguistic information automatically without aiming at explaining or even exhibiting the classic linguistic structures underlying the decision. This is the question we raise in this article, and we intend to address it by exhibiting classic linguistic patterns which are indeed exploited implicitly in deep architectures to reach higher performance. Do neural networks make use of co-occurrences and other standard features considered in traditional Textual Data Analysis (TDA, Textual Mining)? Do they also rely on complementary linguistic structure which is invisible to traditional techniques? If so, projecting neural network features back onto the input space would highlight new linguistic structures and lead to improving the analysis of a corpus and to a better understanding of where the power of Deep Learning techniques comes from.

Our hypothesis is that Deep Learning is sensitive to the linguistic units on which the computation of the key statistical sentences is based, as well as to phenomena other than frequency and to complex linguistic observables. TDA has more difficulty taking such elements, such as linguistic patterns, into account. Our contribution confronts Textual Data Analysis and Convolutional Neural Networks for text analysis. We take advantage of deconvolution networks for image analysis in order to present a new perspective on text analysis to the linguistic community, which we call Text Deconvolution Saliency (TDS).
Our deconvolution saliency corresponds to the sum, over the word embedding dimension, of the deconvolution projection of a given feature map. Such a score provides a heat-map of the words in a sentence that highlights the patterns relevant to the classification decision. We examine the z-test (see Section 4.2) and TDS for three languages: English, French and Latin. For all our datasets, TDS highlights new linguistic observables that are invisible with the z-test alone.

2 Related work

Convolutional Neural Networks (CNNs) are widely used in the computer vision community for a wide panel of tasks, ranging from image classification and object detection to semantic segmentation. They follow a bottom-up approach in which an input image is passed through stacked layers of convolutions, non-linearities and sub-sampling. Encouraged by their success on vision tasks, researchers applied CNNs to text-related problems Kalchbrenner et al. (2014); Kim (2014). The use of CNNs for sentence modeling traces back to Collobert and Weston (2008). Collobert adapted CNNs for various NLP problems including Part-of-Speech tagging, chunking, Named Entity Recognition and semantic labeling. CNNs for NLP work by analogy between an image and a text representation: each word is embedded in a vector representation, and the concatenation of several word vectors builds a matrix.

We first discuss our choice of architectures. While Recurrent Neural Networks (mostly GRU and LSTM) are known to perform well on a broad range of text tasks, recent comparisons have confirmed the advantage of CNNs over RNNs when the task at hand is essentially a keyphrase recognition task Yin et al. (2017). In Textual Mining, we aim at highlighting linguistic patterns in order to analyze their contrast: specificities and similarities in a corpus Feldman and Sanger (2007); Lebart, Salem and Berry (1998). It mostly relies on frequency-based methods such as the z-test. However, such existing methods have so far encountered difficulties in underlining more challenging linguistic knowledge, which up to now has not been empirically observed, for instance syntactical motifs Mellet and Longrée (2009). In that context, supervised classification, especially with CNNs, may be exploited for corpus analysis. Indeed, a CNN automatically learns parameters to cluster similar instances and drive away instances from different categories. Eventually, its predictions rely on features which infer specificities and similarities in a corpus. Projecting such features back into the word embedding space will reveal relevant spots and may automatize the discovery of new linguistic structures, such as the previously cited syntactical motifs. Moreover, CNNs hold other advantages for linguistic analysis. They are static architectures that, under specific settings, are more robust to the vanishing gradient problem, and thus can also model long-term dependencies in a sentence Dauphin et al. (2017); Wen et al. (2017); Adel and Schütze (2017). Such a property may help to detect structures relying on different parts of a sentence.

All previous works converge to a shared assessment: both CNNs and RNNs provide relevant, but different, kinds of information for text classification. However, though several works have studied the linguistic structures inherent in RNNs, to our knowledge none of them have focused on CNNs. A first line of research has extensively studied the interpretability of word embeddings and their semantic representations Ji and Eisenstein (2014).
When it comes to deep architectures, Karpathy et al. (2015) used LSTMs on character-level language modeling as a testbed. They demonstrate the existence of long-range dependencies on real-world data. Their analysis is based on gate activation statistics and is thus global. On another side, Li et al. (2015) provided new visualization tools for recurrent models. They use decoders, t-SNE and first-derivative saliency in order to shed light on how neural models work. Our perspective differs from their line of research, as we do not intend to explain how CNNs work on textual data, but rather to use their features to provide complementary information for linguistic analysis.

Although the usage of RNNs is more common, there are various visualization tools for CNN analysis, inspired by the computer vision field. Such works may help us to highlight the linguistic features learned by a CNN; consequently, our method takes inspiration from them. Visualization models in computer vision mainly consist in inverting hidden layers in order to spot active regions or features that are relevant to the classification decision. One can either train a decoder network or use backpropagation on the input instance to highlight its most relevant features. While those methods may hold accurate information in their input recovery, they have two main drawbacks: (i) they are computationally expensive: the first method requires training a model for each latent representation, and the second relies on backpropagation for each submitted sentence; (ii) they are highly hyperparameter dependent and may require some fine-tuning depending on the task at hand. On the other hand, Deconvolution Networks, proposed by Zeiler and Fergus (2014), provide an off-the-shelf method to project a feature map into the input space. It consists in inverting each convolutional layer iteratively, back to the input space. The inverse of a discrete convolution is computationally challenging; in response, a coarse approximation may be employed, which consists of inverting the channels and filter weights in a convolutional layer and then transposing their kernel matrix. More details of the deconvolution heuristic are provided in Section 3. Deconvolution has several advantages. First, it induces minimal computational requirements compared to previous visualization methods. Also, it has been used with success for semantic segmentation of images: Noh et al. (2015) demonstrate the efficiency of deconvolution networks in predicting segmentation masks that identify pixel-wise class labels. Thus deconvolution is able to localize meaningful structure in the input space.

3 Model

3.1 CNN for Text Classification

We propose a deep neural model to capture linguistic patterns in text. This model is based on a Convolutional Neural Network with an embedding layer for word representations, one convolutional layer with pooling and non-linearities, and finally two fully-connected layers. The final output size corresponds to the number of classes. The model is trained by cross-entropy with an Adam optimizer. Figure 1 shows the global structure of our architecture. The input is a sequence of words w1, w2, ..., wn and the output contains class probabilities (for text classification). The embedding is built on top of a Word2Vec architecture; here we consider a Skip-gram model. This embedding is also fine-tuned by the model to increase the accuracy.
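A minimal sketch of such an architecture in PyTorch is given below; the hyperparameters (filter height, number of filters, hidden size) are illustrative assumptions rather than the settings used in the paper.

```python
import torch
import torch.nn as nn

# Embedding, one convolution whose filters span the full embedding dimension,
# max pooling over time, and two fully-connected layers; train with
# nn.CrossEntropyLoss and the Adam optimizer.

class TextCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, n_filters=100,
                 filter_height=3, hidden=64, n_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # the filter width equals the embedding dimension (non-isotropic text input)
        self.conv = nn.Conv2d(1, n_filters, kernel_size=(filter_height, emb_dim))
        self.fc1 = nn.Linear(n_filters, hidden)
        self.fc2 = nn.Linear(hidden, n_classes)

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        x = self.embedding(tokens).unsqueeze(1)      # (batch, 1, seq_len, emb_dim)
        fmap = torch.relu(self.conv(x)).squeeze(3)   # (batch, n_filters, seq_len - h + 1)
        pooled = fmap.max(dim=2).values              # max pooling over time
        return self.fc2(torch.relu(self.fc1(pooled)))  # class logits
```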
Notice that we do not use lemmatisation, as in Collobert and Weston (2008); thus the linguistic material which is automatically detected does not rely on any prior assumptions about the part of speech. In computer vision, images are considered as 2-dimensional isotropic signals. A text representation may also be considered as a matrix: each word is embedded in a feature vector and their concatenation builds a matrix. However, we cannot assume that both dimensions, the sequence of words and their embedding representation, are isotropic. Thus the filters of CNNs for text typically differ from their counterparts designed for images. Consequently, in text, the width of the filter is usually equal to the dimension of the embedding, as illustrated with the red, yellow, blue and green filters in Figure 1. Using CNNs has another advantage in our context: due to the convolution operators involved, they can be easily parallelized and may also easily be run on a CPU, which is a practical solution for avoiding the use of GPUs at test time.

Figure 1: CNN for Text Classification.

3.2 Deconvolution

Extending Deconvolution Networks to text is not straightforward. Usually, in computer vision, the deconvolution is represented by a convolution whose weights depend on the filters of the CNN: we invert the weights of the channels and the filters and then transpose each kernel matrix. When considering deconvolution for text, transposing the kernel matrices is not realistic, since we are dealing with non-isotropic dimensions (the word sequence and the filter dimension). Consequently, the kernel matrix is not transposed. Another issue concerns the dimension of the feature map. Here, feature map means the output of the convolution before applying max pooling. Its shape is the tuple (# words, # filters). Because the filters' width (red, yellow, blue and green in Figure 1) matches the embedding dimension, the feature maps cannot contain this information. To project the feature map into the embedding space, we need to convolve our feature map with the kernel matrices. To this aim, we upsample the feature map to obtain a 3-dimensional sample of size (# words, embedding dimension, # filters). To analyze the relevance of a word in a sentence, we only keep one value per word, which corresponds to the sum along the embedding axis of the output of the deconvolution. We call this sum Text Deconvolution Saliency (TDS). For the sake of consistency, we sum up our method in Figure 2.
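The TDS computation itself can be sketched in a few lines of numpy, under the simplifying assumption that projecting the feature map back through the (non-transposed) kernel weights amounts to distributing each activation over the word positions its filter covered; this is a coarse reading of the heuristic above, not the authors' implementation.

```python
import numpy as np

def text_deconvolution_saliency(feature_map, filters):
    """
    feature_map: (n_positions, n_filters) activations before max pooling,
                 where n_positions = n_words - h + 1 for filter height h.
    filters:     (n_filters, h, emb_dim) convolution kernels.
    Returns one TDS value per word (sum along the embedding axis).
    """
    n_positions, n_filters = feature_map.shape
    _, h, emb_dim = filters.shape
    n_words = n_positions + h - 1
    projection = np.zeros((n_words, emb_dim))
    for f in range(n_filters):
        for p in range(n_positions):
            # distribute the activation back over the h words the filter covered
            projection[p:p + h] += feature_map[p, f] * filters[f]
    return projection.sum(axis=1)   # sum along the embedding axis = TDS per word
```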
Indeed TDS captures z-test, as we did not find any sentence on which z-test succeeds while TDS fails. Red words in the studied examples are the highest TDS. The first dataset we used for our experiments is the well known IMDB movie review corpus for sentiment classification. It consists of 25,000 reviews labeled by positive or negative sentiment with around 230,000 words. The second dataset targets French political discourses. It is a corpus of 2.5 millions of words of French Presidents from 1958 (with De Gaulle, the first President of the Fifth Republic) to 2018 with the first speeches by Macron. In this corpus we have removed Macron’s speech from the 31st of December 2017, to use it as a test data set. The training task is to recognize each french president. The last dataset we used is based on Latin. We assembled a contrastive corpus of 2 million words with 22 principle authors writting in classical Latin. As with the French dataset, the learning task here is to be able to predict each author according to new sequences of words. The next example is an excerpt of chapter 26 of the 23th book of Livy: [...] tutus tenebat se quoad multum ac diu obtestanti quattuor milia peditum et quingenti equites in supplementum missi ex Africa sunt . tum refecta tandem spe castra propius hostem mouit classem que et ipse instrui parari que iubet ad insulas maritimam que oram tutandam . in ipso impetu mouendarum de [...] 4.2 Z-test Versus Text Deconvolution Saliency Z-test is one of the standard metrics used in linguistic statistics, in particular to measure the occurrences of word collocations Manning and Sch¨utze (1999). Indeed, the z-test provides a statistical score of the co-occurrence of a sequence of words to appear more frequently than any other sequence of words of the same length. This score results from the comparison between the frequency of the observerd word sequence with the frequency expected in the case of a ”Normal” distribution. In the context of constrative corpus analysis, this same calculation applied to single words can readily provide, for example, the most specific vocabulary of a given author. The highest z-test are the most specific words of this given author in this case. This is a simple but strong method for analyzing features of text. It can also be used to classify word sentences according to the global z-test (sum of the scores) of all the words in the given sentence. We can thus use this global z-test as a very simple metric for authorship classification. The resulting authorship of a given sentence is for instance given by the author corresponding to the highest global z-test on that sentence compared to all other global z-test obtained by summing up the z-test of each word of the same sentence but with the vocabulary specificity of another author. The mean accuracy of assigning the right author to the right sentence, in our data set, is around 87%, which confirms that z-test is indeed meaningful for 552 z-test Deep Learning Latin 84% 93% French 89% 91% English 90% 97% Table 1: Test accuray with z-test and Deep Learning contrast pattern analysis. On the other hand, most of the time CNN reaches an accuracy greater than 90% for text classification (as shown in Table 1). This means that the CNN approaches can learn also on their own some of the linguistic specificities useful in discriminating text categories. 
Previous works on image classification have highlighted the key role of convolutional layers which learn different level of abstractions of the data to make classification easier. The question is: what is the nature of the abstraction on text? We show in this article that CNN approach detects automatically words with high z-test but obviously this is not the only linguistic structure detected. To make the two values comparable, we normalize them. The values can be either positive or negative. And we distinguish between two thresholds1 for the z-test: over 2 a word is considered as specific and over 5 it is strongly specific (and the oposite with negative values). For the TDS it is just a matter of activation strength. The Figure 3 shows us a comparison between z-test and TDS on a sentence extracted from our Latin corpora (Livy Book XXIII Chap. 26). This sentence is an example of specific words used by Livy2. As we can see, when the z-test is the highest, the TDS is also the highest and the TDS values are high also for the neighbor words (for example around the word castra). However, this is not always the case: for example small words as que or et are also high in z-test but they do not impact the network at the same level. We can see also on Figure 3 that words like tenebat, multum or propius are totally uncorrelated. The Pearson cor1The z-test can be approximated by a normal distribution. The score we obtain by the z-test is the standard deviation. A low standard deviation indicates that the data points tend to be close to the mean (the expected value). Over 2 this score means there is less than 2% of chance to have this distribution. Over 5 it’s less than 0.1%. 2Titus Livius Patavinus – (64 or 59 BC - AD 12 or 17) – was a Roman historian. Figure 3: z-test versus Text Deconvolution Saliency (TDS) - Example on Livy Book XXIII Chap. 26 relation coefficient3 tells us that in this sentence there is no linear correlation between z-test and TDS (with a Pearson of 0.38). This example is one of the most correlated examples of our dataset, thus CNN seems to learn more than a simple ztest. 4.3 Dataset: English For English, we used the IMDB movie review corpus for sentiment classification. With the default methods, we can easily show the specific vocabulary of each class (positive/negative), according to the z-test. There are for example the words too, bad, no or boring as most indicitive of negative sentiment, and the words and, performance, powerful or best for positive. Is it enough to detect automatically if a new review is positive or not? Let’s see an example excerpted from a review from December 2017 (not in the training set) on the last American blockbuster: [...] i enjoyed three moments in the film in total , and if i am being honest and the person next to me fell asleep in the middle and started snoring during the slow space chasescenes . the story failed to draw me in and entertain me the way [...] In general the z-test is sufficient to predict the class of this kind of comment. But in this case, the CNN seems to do better, but why? 3Pearson correlation coefficient measures the linear relationship between two datasets. It has a value between +1 and −1, where 1 is total positive linear correlation, 0 is no linear correlation, and −1 is total negative 553 If we sum all the z-test (for negative and positive), the positive class obtains a greater score than the negative. The words film, and, honest and entertain – with scores 5.38, 12.23, 4 and 2.4 – make this example positive. 
The CNN has activated different parts of this sentence (shown in bold/red in the example). If we take the sub-sequence and if i am being honest and, there are two occurrences of and, but the first one is followed by if, and our toolbox gives 0.84 for and if in the negative class. This is far from the 12.23 in the positive class. If we go further, we can run a co-occurrence analysis of and if on the training set. As our co-occurrence analysis shows (Figure 4), honest is among the most specific adjectivals associated with and if: exactly what we found in our example. (These co-occurrence figures show the major co-occurrences of a given word, lemma or part of speech. There are two layers of co-occurrence: the first, on top, shows the direct co-occurrences, and the second, on the bottom, shows a second level of co-occurrence given by the context of the two words taken together. The colors and dotted lines only improve readability, dotted lines marking the first level, and the width of each line reflects the z-test score: the higher the z-test, the wider the line. With our toolbox, we can focus on different parts of speech.)

Figure 4: co-occurrence analysis of and if (Hyperbase)

In addition, we observe the same behavior with the verb fall, which has the word asleep next to it. Asleep alone is not really specific to negative reviews (z-test of 1.13), but the association of the two words becomes highly specific to negative sentences (see the co-occurrence analysis in Figure 5).

Figure 5: co-occurrence analysis of fall (Hyperbase)

The Text Deconvolution Saliency here confirms that the CNN seems to focus not only on words with a high z-test but on more complex patterns, and may detect the lemma or the part of speech linked to each word. We will now see that these observations remain valid for other languages and can even be generalized across different TDS analyses.

4.4 Dataset: French

In this corpus we have removed Macron's speech of the 31st of December 2017 to use it as a test set. On this speech, the CNN primarily recognizes Macron (the training task was to predict the correct President). To achieve this, the CNN seems to succeed in finding genuinely complex patterns specific to Macron, for example in this sequence:

[...] notre pays advienne à l'école pour nos enfants, au travail pour l'ensemble de nos concitoyens pour le climat pour le quotidien de chacune et chacun d'entre vous . Ces transformations profondes ont commencé et se poursuivront avec la même force le même rythme la même intensité [...]

The z-test gives a result statistically closer to De Gaulle than to Macron. The error in the statistical attribution can be explained by a Gaullist phraseology and the multiplication of linguistic markers strongly indexed with De Gaulle: De Gaulle had the specificity of making long and literary sentences articulated around coordinating conjunctions such as et (z-test = 28 for De Gaulle, two occurrences in the excerpt). His speech was also more conceptual than average, and this resulted in an over-use of the definite articles (le, la, l', les), very numerous in the excerpt (7 occurrences), especially in the feminine singular (la république, la liberté, la nation, la guerre, etc.; here we have la même force, la même intensité). The better result given by the CNN may be surprising for a linguist, but it matches perfectly what is known about the sociolinguistics of Macron's dynamic kind of speech.

Figure 6: Deconvolution on Macron speech.
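A window-based pair score of the kind reported above (for instance for and if with honest, or fall with asleep) can be approximated as follows. The windowing strategy, the function names and the reuse of the binomial z-score are our assumptions, not necessarily the exact computation performed by Hyperbase.

```python
import math

def pair_hits(tokens, target, neighbor, window=10):
    """Number of occurrences of `target` with `neighbor` within +/- `window` tokens."""
    hits = 0
    for i, tok in enumerate(tokens):
        if tok == target and neighbor in tokens[max(0, i - window): i + window + 1]:
            hits += 1
    return hits

def cooccurrence_z(sub_tokens, corpus_tokens, target, neighbor, window=10):
    """How specific the association (target, neighbor) is to the subcorpus,
    using the same binomial approximation as for single-word specificity."""
    n_sub = sum(1 for t in sub_tokens if t == target)      # trials in the subcorpus
    n_all = sum(1 for t in corpus_tokens if t == target)   # trials in the whole corpus
    if n_sub == 0 or n_all == 0:
        return 0.0
    p = pair_hits(corpus_tokens, target, neighbor, window) / n_all
    observed = pair_hits(sub_tokens, target, neighbor, window)
    expected = n_sub * p
    std = math.sqrt(n_sub * p * (1.0 - p))
    return (observed - expected) / std if std > 0 else 0.0
```

Here sub_tokens would be, for example, the concatenated negative reviews, and corpus_tokens the whole training corpus.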
The part of the excerpt that impacts the CNN classification most is related to the nominal syntagm transformations profondes. Taken separately, neither of the phrase's two words is very Macronian from a statistical point of view (transformations = 1.9, profondes = 2.9). Moreover, the syntagm itself does not appear in the President's training corpus (0 occurrences). However, the co-occurrence of transformations and profondes amounts to 4.81 for Macron: it is not the occurrence of one word or the other alone that is Macronian, but the simultaneous appearance of both in the same window.

The second most impacting, and complementary, part of the excerpt is related to the two verbs advienne and poursuivront. From a semantic point of view, the two verbs contribute perfectly, after the phrase transformations profondes, to giving the necessary dynamic to a discourse that advocates change. However, it is the verb tenses (carried by the morphology of the verbs) that appear to be the determining factor in the analysis. The calculation of the grammatical codes co-occurring with the word transformations indicates that verbs in the subjunctive and verbs in the future (and also nouns) are the privileged codes for Macron (Figure 7).

Figure 7: Main part-of-speech co-occurrences for transformations (Hyperbase)

More precisely, the algorithm indicates that, for Macron, when transformations is associated with a verb in the subjunctive (here advienne), there is usually a verb in the future co-present (here poursuivront). transformations profondes, advienne in the subjunctive, poursuivront in the future: all these elements together form a speech promising action, from the mouth of a young and dynamic President. Finally, the graph indicates that transformations is especially associated with nouns in the President's speeches: in an extraordinary concentration, the excerpt lists 11 of them (pays, école, enfants, travail, concitoyens, climat, quotidien, transformations, force, rythme, intensité).

4.5 Dataset: Latin

As with the French dataset, the learning task here is to predict the identity of each author from a contrastive corpus of 2 million words by 22 principal authors writing in classical Latin. The statistics here identify this sentence as Caesar's (Gaius Julius Caesar, 100 BC - 44 BC, usually called Julius Caesar, was a Roman politician and general and a notable author of Latin prose), but Livy is not far off. As historians, Caesar and Livy share a number of specific words: for example tool words like se (reflexive pronoun) or que (a coordinator), and prepositions like in, ad, ex, de. There are also nouns like equites (cavalry) or castra (fortified camp). The attribution of the sentence to Caesar cannot rely only on the z-test: for que, in or castra, the deviations are equivalent to or lower than Livy's. On the other hand, the deviations of se and ex are greater, as is that of equites. Two very Caesarian terms undoubtedly make the difference: iubet (he orders) and milia (thousands). The greater scores of quattuor (four), castra, hostem (the enemy) and impetu (the assault) in Livy are not enough to switch the attribution to this author.
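The part-of-speech co-occurrence profile described above (subjunctive and future verbs around transformations) can be gathered with a helper like the one below, assuming a tagged subcorpus given as (token, tag) pairs. The helper name and the fixed-size window are illustrative assumptions, and the resulting tag counts can then be ranked against their corpus-wide distribution with the same specificity z-score as before.

```python
from collections import Counter

def tag_cooccurrents(tagged_tokens, target, window=10):
    """Count the part-of-speech tags found within +/- `window` positions of each
    occurrence of `target` in a subcorpus given as (token, tag) pairs."""
    counts = Counter()
    for i, (token, _) in enumerate(tagged_tokens):
        if token == target:
            for _, tag in tagged_tokens[max(0, i - window): i + window + 1]:
                counts[tag] += 1
    return counts
```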
On the other hand, the CNN activates several zones appearing at the beginning of sentences and corresponding to coherent syntactic structures for Livy, such as tandem refecta spe castra propius hostem mouit (then, hope having finally returned, he moved the camp closer to the camp of the enemy), despite the fact that castra in hostem mouit is attested only in Tacitus (Publius, or Gaius, Cornelius Tacitus, 56 BC - 120 BC, was a senator and a historian of the Roman Empire). There is also in ipso metu (in fear itself), while in followed by metu is counted once in Caesar and once also in Quinte-Curce (Quintus Curtius Rufus was a Roman historian, probably of the 1st century, whose only known and only surviving work is the "Histories of Alexander the Great"). More complex structures are possibly also detected by the CNN: the structure tum + ablative absolute participle (tum refecta) is more characteristic of Livy (z-test 3.3 with 8 occurrences) than of Caesar (z-test 1.7 with 3 occurrences), even if it is still more specific to Tacitus (z-test 4.2 with 10 occurrences). Finally, and most likely, the co-occurrence between castra, hostem and impetu may have played a major role (Figure 8).

Figure 8: Specific co-occurrences between impetu and castra (Hyperbase)

In Livy, impetu appears as a co-occurrent of the lemmas hostis (z-test 9.42) and castra (z-test 6.75), while in Caesar hostis only has a deviation of 3.41 and castra does not appear in the list of co-occurrents. For castra, the first co-occurrent in Livy is hostis (z-test 22.72), before castra (z-test 10.18), ad (z-test 10.85), in (z-test 8.21), impetus (z-test 7.35) and que (z-test 5.86), while in Caesar impetus does not appear and the scores of all the other lemmas are lower, except castra: castra (z-test 15.15), hostis (8), ad (10.35), in (5.17), que (4.79). Thus, our results suggest that CNNs manage to account for specificity, phrase structure and co-occurrence networks.

4.6 Preprocessing and hyperparameters

In order to make our experiments reproducible, we detail here all the hyperparameters used in our architecture. The neural network is written in Python with the Keras library (with TensorFlow as the backend). The embedding uses the Word2Vec implementation provided by the gensim library. Here we use the skip-gram model with a window size of 10 words and output vectors of 128 values (the embedding dimension). The textual data are tokenized by a homemade tokenizer (which works on English, Latin and French). The corpus is split into sequences of 50 words (punctuation is kept) and each word is converted into a unique vector of 128 values.

The first layer of our model takes the text sequence (as word vectors) and applies weights corresponding to our Word2Vec values; these weights remain trainable during model training. The second layer is the convolution, a Conv2D in Keras with 512 filters of size 3 x 128 (filtering three words at a time), with a ReLU activation. Then comes the max pooling (MaxPooling2D). (The deconvolution model is identical up to this point; we replace the rest of the classification model, the Dense layers, by a transposed convolution, Conv2DTranspose.) The last layers of the model are Dense layers: one hidden layer of 100 neurons with a ReLU activation, and one final layer whose size equals the number of classes, with a softmax activation.
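A minimal Keras sketch consistent with the hyperparameters listed above is given below. The function name, the Reshape step and the pooling window are our assumptions (the pooling size is not reported), so this should be read as an illustration of the described architecture rather than the authors' exact code.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(vocab_size, n_classes, seq_len=50, emb_dim=128,
                     n_filters=512, w2v_weights=None):
    emb_init = ("uniform" if w2v_weights is None
                else tf.keras.initializers.Constant(w2v_weights))
    inputs = layers.Input(shape=(seq_len,), dtype="int32")
    # Word2Vec-initialized embeddings, kept trainable as described above.
    x = layers.Embedding(vocab_size, emb_dim,
                         embeddings_initializer=emb_init, trainable=True)(inputs)
    # Treat the sequence as a (seq_len x emb_dim) single-channel "image".
    x = layers.Reshape((seq_len, emb_dim, 1))(x)
    # 512 filters spanning 3 words by the full embedding dimension, ReLU activation.
    x = layers.Conv2D(n_filters, kernel_size=(3, emb_dim), activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=(2, 1))(x)   # pooling window: our assumption
    # For the deconvolution (TDS) variant, everything from here on is replaced
    # by a Conv2DTranspose layer, as described above.
    x = layers.Flatten()(x)
    x = layers.Dense(100, activation="relu")(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The compile settings anticipate the training setup reported next (cross-entropy loss with an Adam optimizer).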
All experiments in this paper share the same architecture and the same hyperparameters, and 556 are trained with a cross-entropy method (with an Adam optimizer) with 90% of the dataset for the training data and 10% for the validation. All the tests in this paper are done with new data not included in the original dataset. 5 Conclusion In a nutshell, Text Deconvolution Saliency is efficient on a wide range of corpora. By crossing statistical approaches with neural networks, we propose a new strategy for automatically detecting complex linguistic observables, which up to now hardly detectable by frequency-based methods. Recall that the linguistic matter and the topology recovered by our TDS cannot return to chance: the zones of activation make it possible to obtain recognition rates of more than 91% on the French political speech and 93% on the Latin corpus; both rates equivalent to or higher than the rates obtained by the statistical calculation of the key passages. Improving the model and understanding all the mathematical and linguistic outcomes remains an import goal. In future work, we intend to thoroughly study the impact of TDS given morphosyntactic information. Acknowledgments This work has been partly funded by the French Government (National Research Agency, ANR) through the grant ANR-16-CE23-0006 Deep in France, through the “Investments for the Future” Program ANR-11-LABX-0031-01, and through the UCAJEDI Investments in the Future project ANR-15-IDEX-01. References Heike Adel and Hinrich Sch¨utze. 2017. Global normalization of convolutional neural networks for joint entity and relation classification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1723–1729. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, pages 160–167, New York, NY, USA. ACM. Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In International Conference on Machine Learning, pages 933–941. Feldman, R., and J. Sanger. 2007. The Text Mining Handbook. Advanced Approaches in Analyzing Unstructured Data. New York: Cambridge University Press. Hyperbase. Web based toolbox for linguistics analysis. http://hyperbase.unice.fr. Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 13–24. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 655–665. Andrej Karpathy, Justin Johnson, and Li Fei-Fei. 2015. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751. L. Lebart, A. Salem and L. Berry. 1998. Exploring Textual Data. Ed. Springer. Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2015. Visualizing and understanding neural models in nlp. arXiv preprint arXiv:1506.01066. Christopher D Manning and Hinrich Sch¨utze. 1999. 
Foundations of statistical natural language processing. MIT press. S. Mellet and D. Longr´ee. 2009. Syntactical motifs and textual structures. In Belgian Journal of Linguistics 23, pages 161–173. Hyeonwoo Noh, Seunghoon Hong, and Bohyung Han. 2015. Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1520– 1528. Tsung-Hsien Wen, David Vandyke, Nikola Mrkˇsi´c, Milica Gasic, Lina M Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 438–449. Wenpeng Yin, Katharina Kann, Mo Yu, and Hinrich Sch¨utze. 2017. Comparative study of cnn and rnn for natural language processing. arXiv preprint arXiv:1702.01923. 557 Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818– 833. Springer.
2018
51
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 558–568 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 558 Coherence Modeling of Asynchronous Conversations: A Neural Entity Grid Approach Tasnim Mohiuddin∗and Shafiq Joty∗ Nanyang Technological University {mohi0004,srjoty}@ntu.edu.sg Dat Tien Nguyen∗ University of Amsterdam [email protected] Abstract We propose a novel coherence model for written asynchronous conversations (e.g., forums, emails), and show its applications in coherence assessment and thread reconstruction tasks. We conduct our research in two steps. First, we propose improvements to the recently proposed neural entity grid model by lexicalizing its entity transitions. Then, we extend the model to asynchronous conversations by incorporating the underlying conversational structure in the entity grid representation and feature computation. Our model achieves state of the art results on standard coherence assessment tasks in monologue and conversations outperforming existing models. We also demonstrate its effectiveness in reconstructing thread structures. 1 Introduction Sentences in a text or a conversation do not occur independently, rather they are connected to form a coherent discourse that is easy to comprehend. Coherence models are computational models that can distinguish a coherent discourse from incoherent ones. It has ranges of applications in text generation, summarization, and coherence scoring. Inspired by formal theories of discourse, a number of coherence models have been proposed (Barzilay and Lapata, 2008; Lin et al., 2011; Li and Jurafsky, 2017). The entity grid model (Barzilay and Lapata, 2008) is one of the most popular coherence models that has received much attention over the years. As exemplified in Table 1, the model represents a text by a grid that captures how grammatical roles of different discourse entities (e.g., nouns) change from one sentence to ∗All authors contributed equally. s0: LDI Corp., Cleveland, said it will offer $50 million in commercial paper backed by leaserental receivables. s1: The program matches funds raised from the sale of the commercial paper with small to medium-sized leases. s2: LDI termed the paper “non-recourse financing”, meaning that investors would be repaid from the lease receivables, rather than directly by LDI Corp. s3: LDI leases and sells data-processing, telecommunications and other high-tech equipment. INVESTORS MILLION FUNDS EQUIPMENT CORP. PAPER SALE TELECOMM. LEASE PROGRAM CLEVELAND RECEIVABLES LEASES DATA-PROCESS. LDI NON-RECOURSE s0 − O − − S X − − − − X X − − X − s1 − − O − − X X − − S − − X − − − s2 S − − − X S − − X − − X − − S X s3 − − − O − − − X − − − − − X S − Table 1: Entity grid representation (bottom) for a document (top) from the WSJ corpus. another in the text. The grid is then converted into a feature vector containing probabilities of local entity transitions, enabling machine learning models to measure the degree of coherence. Earlier extensions of this basic model incorporate entityspecific features (Elsner and Charniak, 2011b), multiple ranks (Feng and Hirst, 2012), and coherence relations (Feng et al., 2014). Recently, Nguyen and Joty (2017) proposed a neural version of the grid models. Their model first transforms the grammatical roles in a grid into their distributed representations, and employs a convolution operation over it to model entity transitions in the distributed space. 
The spatially maxpooled features from the convoluted features are used for coherence scoring. This model achieves state-of-the-art results in standard evaluation tasks on the Wall Street Journal (WSJ) corpus. Although the neural grid model effectively captures long entity transitions, it is still limited in that it does not consider any lexical information regarding the entities, thereby, fails to distinguish 559 between entity types. Although the extended neural grid considers entity features like named entity and proper mention, it requires an explicit feature extraction step, which can prevent us to transfer the model to a resource-poor language or domain. Apart from these limitations, previous research on coherence models has mainly focused on monologic discourse (e.g., news article). The only exception is the work of Elsner and Charniak (2011a), who applied coherence models to the task of conversation disentanglement in synchronous conversations like phone and chat conversations. With the emergence of Internet technologies, asynchronous communication media like emails, blogs, and forums have become a commonplace for discussing events and issues, seeking answers, and sharing personal experiences. Participants in these media interact with each other asynchronously, by writing at different times. We believe coherence models for asynchronous conversations can help many downstream applications in these domains. For example, we will demonstrate later that coherence models can be used to predict the underlying thread structure of a conversation, which provides crucial information for building effective conversation summarization systems (Carenini et al., 2008) and community question answering systems (Barron-Cedeno et al., 2015). To the best of our knowledge, none has studied the problem of coherence modeling in asynchronous conversation before. Because of its asynchronous nature, information flow in these conversations is often not sequential as in monologue or synchronous conversation. This poses a novel set of challenges for discourse analysis models (Joty et al., 2013; Louis and Cohen, 2015). For example, consider the forum conversation in Figure 2(a). It is not obvious how a coherence model like the entity grid can represent the conversation, and use it in downstream tasks effectively. In this paper we aim to remedy the above limitations of existing models in two steps. First, we propose improvements to the existing neural grid model by lexicalizing its entity transitions. We propose methods based on word embeddings to achieve better generalization with the lexicalized model. Second, we adapt the model to asynchronous conversations by incorporating the underlying conversational structure in the grid representation and subsequently in feature computation. For this, we propose a novel grid representation for asynchronous conversations, and adapt the convolution layer of the neural model accordingly. We evaluate our approach on two discrimination tasks. The first task is the standard one, where we assess the models based on their performance in discriminating an original document from its random permutation. In our second task, we ask the models to distinguish an original document from its inverse order of the sentences. For our adapted model to asynchronous conversation, we also evaluate it on thread reconstruction, a task specific to asynchronous conversation. 
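For concreteness, the pairwise discrimination protocol used in both tasks can be written as a few lines of Python; score_fn stands for the trained model's coherence scorer, and the function and argument names are our own.

```python
def discrimination_accuracy(score_fn, originals, permutations):
    """Fraction of (original, permuted) pairs in which the original document
    receives a higher coherence score than its permuted (or inverted) rendering."""
    correct = total = 0
    for doc, perms in zip(originals, permutations):
        for perm in perms:
            total += 1
            correct += int(score_fn(doc) > score_fn(perm))
    return correct / total if total else 0.0
```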
We performed a series of experiments, and our main findings are: (a) Our experiments on the WSJ corpus validate the utility of our proposed extension to the existing neural grid model, yielding absolute F1 improvements of up to 4.2% in the standard task and up to 5.2% in the inverse-order discrimination task, setting a new state-of-the-art. (b) Our experiments on a forum dataset show that our adapted model that considers the conversational structure outperforms the temporal baseline by more than 4% F1 in the standard task and by about 10% F1 in the inverse order discrimination task. (c) When applied to the thread reconstruction task, our model achieves promising results outperforming several strong baselines. We have released our source code and datasets at https://ntunlpsg.github. io/project/coherence/n-coh-acl18/ 2 Background In this section we give an overview of existing coherence models. In the interest of coherence, we defer description of the neural grid model (Nguyen and Joty, 2017) until next section, where we present our extension to this model. 2.1 Traditional Entity Grid Models Introduced by Barzilay and Lapata (2008), the entity grid model represents a text by a twodimensional matrix. As shown in Table 1, the rows correspond to sentences, and the columns correspond to entities (noun phrases). Each entry Ei,j represents the syntactic role that entity ej plays in sentence si, which can be one of: subject (S), object (O), other (X), or absent (–). In cases where an 560 entity appears more than once with different grammatical roles in the same sentence, the role with the highest rank (S ≻O ≻X) is considered. Motivated by the Centering Theory (Grosz et al., 1995), the model considers local entity transitions as the deciding patterns for assessing coherence. A local entity transition of length k is a sequence of {S,O,X,–}k, representing grammatical roles played by an entity in k consecutive sentences. Each grid is represented by a vector of 4k transition probabilities computed from the grid. To distinguish between transitions of important entities from unimportant ones, the model considers the salience of the entities, which is measured by their occurrence frequency in the document. With the feature vector representation, coherence assessment task is formulated as a ranking problem in a SVM preference ranking framework (Joachims, 2002). Barzilay and Lapata (2008) showed significant improvements in two out of three evaluation tasks when a coreference resolver is used to identify coreferent entities in a text. Elsner and Charniak (2011b) show improvements to the grid model by including non-head nouns as entities. Instead of employing a coreference resolver, they match the nouns to detect coreferent entities. They demonstrate further improvements by extending the grid to distinguish between entities of different types. They do so by incorporating entity-specific features like named entity, noun class and modifiers. Lin et al. (2011) model transitions of discourse roles for entities as opposed to their grammatical roles. They instantiate discourse roles by discourse relations in Penn Discourse Treebank (Prasad et al., 2008). In a follow up work, Feng et al. (2014) trained the same model but using relations derived from deep discourse structures annotated with Rhetorical Structure Theory (Mann and Thompson, 1988). 2.2 Other Existing Models Guinaudeau and Strube (2013) proposed a graphbased unsupervised method. 
They convert an entity grid into a bipartite graph consisting of two sets of nodes, representing sentences and entities, respectively. The edges are assigned weights based on the grammatical role of the entities in the respective sentences. They perform one-mode projections to transform the bipartite graph to a directed graph containing only sentence nodes. The coherence score of the document is then computed as the average out-degree of sentence nodes. Louis and Nenkova (2012) introduced a coherence model based on syntactic patterns by assuming that sentences in a coherent text exhibit certain syntactic regularities. They propose a local coherence model that captures the co-occurrence of structural features in adjacent sentences, and a global model based on a hidden Markov model, which learns the global syntactic patterns from clusters of sentences with similar syntax. Li and Hovy (2014) proposed a neural framework to compute the coherence score of a document by estimating coherence probability for every window of three sentences. They encode each sentence in the window using either a recurrent or a recursive neural network. To get a documentlevel coherence score, they sum up the windowlevel log probabilities. Li and Jurafsky (2017) proposed two encoder-decoder models augmented with latent variables for both coherence evaluation and discourse generation. Their first model incorporates global discourse information (topics) by feeding the output of a sentence-level HMMLDA model (Gruber et al., 2007) into the encoderdecoder model. Their second model is trained end-to-end with variational inference. In our work, we take an entity-based approach, and extend the neural grid model proposed recently by Nguyen and Joty (2017). 3 Extending Neural Entity Grid In this section we first briefly describe the neural entity grid model proposed by Nguyen and Joty (2017). Then, we propose our extension to this model that leads to improved performance. We present our coherence model for asynchronous conversation in the next section. 3.1 Neural Entity Grid Figure 1 depicts the neural grid model of Nguyen and Joty (2017). Given an entity grid E, they first transform each entry Ei,j (a grammatical role) into a distributed representation of d dimensions by looking up a shared embedding matrix M ∈ R|G|×d, where G is the vocabulary of possible grammatical roles, i.e., G = {S, O, X, −}. Formally, the look-up operation can be expressed as: L = h M(E1,1) · · · M(Ei,j) · · · M(EI,J) i (1) where M(Ei,j) refers to the row in M that corresponds to grammatical role Ei,j, and I and J are 561 Figure 1: Neural entity grid model proposed by Nguyen and Joty (2017). The model is trained using a pairwise ranking approach with shared parameters for positive and negative documents. the number of rows (sentences) and columns (entities) in the entity grid, respectively. The result of the look-up operation is a tensor L ∈RI×J×d, which is fed to a convolution layer to model local entity transitions in the distributed space. The convolution layer of the neural network composes patches of entity transitions into highlevel abstract features by treating entities independently (i.e., 1D convolution). Formally, it applies a filter w ∈Rm.d to each local entity transition of length m to generate a new abstract feature zi: zi = h(wT Li:i+m,j + bi) (2) where Li:i+m,j denotes concatenation of m vectors in L for entity ej, bi is a bias term, and h is a nonlinear activation function. 
Repeated application of this filter to each possible m-length transitions of different entities in the grid generates a feature map, zi = [z1, · · · , zI.J+m−1]. This process is repeated N times with N different filters to get N different feature maps, [z1, · · · , zN]. A max-pooling operation is then applied to extract the most salient features from each feature map: p = [µl(z1), · · · , µl(zN)] (3) where µl(zi) refers to the max operation applied to each non-overlapping window of l features in the feature map zi. Finally, the pooled features are used in a linear layer to produce a coherence score: y = uT p + b (4) where u is the weight vector and b is a bias term. The model is trained with a pairwise ranking loss based on ordered training pairs (Ei, Ej): L(θ) = max{0, 1 −φ(Ei|θ) + φ(Ej|θ)} (5) where entity grid Ei exhibits a higher degree of coherence than grid Ej, and y = φ(Ek|θ) denotes the transformation of input grid Ek to a coherence score y done by the model with parameters θ. We will see later that such ordering of documents (grids) can be obtained automatically by permuting the original document. Notice that the network shares its parameters (θ) between the positive (Ei) and the negative (Ej) instances in a pair. Since entity transitions in the convolution step are modeled in a continuous space, it can effectively capture longer transitions compared to traditional grid models. Unlike traditional grid models that compute transition probabilities from a single grid, convolution filters and role embeddings in the neural model are learned from all training instances, which helps the model to generalize well. Since the abstract features in the feature maps are generated by convolving over role transitions of different entities in a document, the model implicitly considers relations between entities in a document, whereas transition probabilities in traditional entity grid models are computed without considering any such relation between entities. Convolution over the entire grid also incorporates global information (e.g., topic) of a discourse. 3.2 Lexicalized Neural Entity Grid Despite its effectiveness, the neural grid model presented above has a limitation. It does not consider any lexical information regarding the entities, thus, cannot distinguish between transitions of different entities. Although the extended neural grid model proposed in (Nguyen and Joty, 2017) does incorporate entity features like named entity type and proper mention, it requires an explicit feature extraction step using tools like named entity recognizer. This can prevent us in transferring the model to resource-poor languages or domains. To address this limitation, we propose to lexicalize entity transitions. This can be achieved by attaching the entity with the grammatical roles. For example, if an entity ej appears as a subject (S) in sentence si, the grid entry Ei,j will be encoded as ej-S. This way, an entity OBAMA as subject (OBAMA-S) and as object (OBAMA-O) will have separate entries in the embedding matrix M. We can initialize the word-role embeddings randomly, or with pre-trained embeddings for the word (OBAMA). In another variation, we kept word and role embeddings separate and con562 Author: barspinboy Post ID: 1 s0: im having troubles since i uninstall some of my apps, then when i checked my system registry bunch of junks were left behind by the apps i already uninstall. s1: is there any way i could clean my registry aside from expensive registry cleaners. 
Author: kees bakker Post ID: 2 s2: use regedit to delete the ‘bunch of junks’ you found in registry. s3: regedit is free, but depending on which applications it were .. s4: it’s somewhat doubtful there will be less crashes and faster setup. Author: willy Post ID: 3 s5: i tend to use ccleaner (google for it) as a registry cleaner. s6: using its defaults does pretty well. s7: in no way will it cure any hardcore problems as you mentioned. s8: i further suggest, .. Author: caktus Post ID: 4 s9: try regseeker to clean your registry junk. s10: it’s free and pretty safe to use automatic. s11: then clean temp files (don’t compress any files or use indexing.) s12: if the c drive is compressed, then uncompress it. Author: barspinboy Post ID: 5 s13: thanks guyz, my registry is clean now s14: i tried all those suggestions you mentioned ccleaners regedit defragmentation and uninstalling process; it all worked out (a) A forum conversation p1 s0 s1 p2 s2 s3 s4 p3 s5 s6 s7 s8 p4 s9 s10 s11 s12 p5 s13 s14 (b) Conversational tree p1 O O p2 O – – p3 O – – – p4 O – – – p5 S – (c) Role transition for ‘registry’ registry P2 P1 P0 l0 O O O l1 O O O l2 O O O l3 – – – l4 – – – l5 – – φ l6 S φ φ l7 – φ φ (d) Grid representations Figure 2: (a) A forum conversation, (b) Thread structure of the conversation, (c) Entity role transition over a conversation tree, and (d) 2D role transition matrix for an entity; φ denotes zero-padding. catenated them after the look-up, thus enforcing OBAMA-S and OBAMA-O to share a part of their representations. However, in our experiments, we found the former approach to be more effective. 4 Coherence Models for Asynchronous Conversations The main difference between monologue and asynchronous conversation is that information flow in asynchronous conversation is not sequential as in monologue, rather it is often interleaved. For example, consider the forum conversation in Figure 2(a). There are three possible subconversations, each corresponding to a path from the root node to a leaf node in the conversation graph in Figure 2(b). In response to seeking suggestions about how to clean system registry, the first path (p1←p2) suggests to use regedit, the second path (p1←p3) suggests ccleaner, and the third one (p1←p4) suggests using regseeker. These discussions are interleaved in the chronological order of the posts (p1←p2←p3←p4←p5). Therefore, monologue-based coherence models may not be effective if applied directly to the conversation. We hypothesize that coherence models for asynchronous conversation should incorporate the conversational structure like the tree structure in Figure 2(b), where the nodes represent posts and the edges represent ‘reply-to’ links between them. Since the grid models operate at the sentence level, we construct conversational structure at the sentence level. We do this by linking the boundary sentences across posts and by linking sentences in the same post chronologically. Specifically, we connect the first sentence of post pj to the last sentence of post pi if pj replies to pi, and sentence st+1 is linked to st if both st and st+1 are in the same post.1 Now the question is, how can we represent a conversation tree with an entity grid, and then model entity transitions in the tree? In the following, we describe our approach to this problem. 4.1 Conversational Entity Grid The conversation tree captures how topics flow in an asynchronous conversation. 
Our key hypothesis is that in a coherent conversation entities exhibit certain local patterns in the conversation tree in terms of their distribution and syntactic realization. Figure 2(c) shows how the grammatical roles of entity ‘registry’ in our example conversation change over the tree. For coherence assessment, we wish to model entity transitions along each of the conversation paths (top-to-bottom), and also their spatial relations across the paths (left-to-right). The existing grid representation is insufficient to model the two-dimensional (2D) spatial entity transitions in a conversation tree. We propose a three-dimensional (3D) grid for representing entity transitions in an asynchronous conversation. The first dimension in our grid rep1The links between sentences are not explicitly shown in Figure 2(b) to avoid visual clutter. 563 Figure 3: Conversational Neural Grid model for assessing coherence in asynchronous conversations. resents entities, while the second and third dimensions represent depth and path of the tree, respectively. Figure 2(d) shows an example representation for an entity ‘registry’. Each column in the matrix represents transitions of the entity along a path, whereas each row represents transitions of the entity at a level of the conversation tree. Although illustrated with a tree structure, our method is applicable to general graph-structured conversations, where a post can reply to multiple previous posts. Our model relies on paths from the root to the leaf nodes, which can be extracted for any graph as long as we avoid loops. 4.2 Modeling Entity Transitions As shown in Figure 3, given a 3D entity grid as input, the look-up layer (Eq. 1) of our neural grid model produces a 4D tensor L∈RI×J×P×d, where I is the total number of entities in the conversation, J is the depth of the tree, P is the number of paths in the tree, and d is the embedding dimension. The convolution layer then uses a 2D filter w ∈Rm.n.d to convolve local patches of entity transitions zi = h(wT Li,j:j+m,p:p+n + bi) (6) where m and n are the height and width of the filter, and Li,j:j+m,p:p+n ∈Rm.n.d denotes a concatenated vector containing (m × n) embeddings representing a 2D window of entity transitions. As we repeatedly apply the filter to each possible window with stride size 1, we get a 2D feature map Zi of dimensions (I.J +m−1)×(I.P +n−1). Employing N different filters, we get N such 2D feature maps, [Z1, · · · , ZN], based on which the max pooling layer extracts the most salient features: p = [µl×w(Z1), · · · , µl×w(ZN)] (7) where µl×w refers to the max operation applied to each non-overlapping 2D window of l×w features in a feature map. The pooled features are then linearized and used for coherence scoring in the final layer of the network as described by Equation 4. 5 Experiments on Monologue To validate our proposed extension to the neural grid model, we first evaluate our lexicalized neural grid model in the standard evaluation setting. Evaluation Tasks and Dataset: We evaluate our models on the standard discrimination task (Barzilay and Lapata, 2008), where a coherence model is asked to distinguish an original document from its incoherent renderings generated by random permutations of its sentences. The model is considered correct if it ranks the original document higher than the permuted one. We use the same train-test split of the WSJ dataset as used in (Nguyen and Joty, 2017) and other studies (Elsner and Charniak, 2011b; Feng et al., 2014). 
Following previous studies, we use 20 random permutations of each article for both training and testing, and exclude permutations that match the original article. Table 2 gives some statistics about the dataset along with the number of pairs used for training and testing. Nguyen and Joty (2017) randomly selected 10% of the training pairs for development purposes, which we also use for tuning hyperparameters in our models. In addition to the standard setting, we also evaluate our models on an inverse-order setting, where we ask the models to distinguish an original document from the inverse order of its sentences (i.e., from last to first). The transitions of roles in a negative grid are in the reverse order of the original grid. We do not train our models explicitly on this task, rather use the trained model from the standard setting. The number of test pairs in this setting is same as the number of test documents. Model Settings and Training: We train the neural models with the pairwise ranking loss in Equation 5. For a fair comparison, we use 564 Sections # Doc. Avg. # Sen. # Pairs Train 00-13 1,378 21.5 26,422 Test 14-24 1,053 22.3 20,411 Table 2: Statistics on the WSJ dataset. similar model settings as in (Nguyen and Joty, 2017)2 – ReLU as activation functions (h), RMSprop (Tieleman and Hinton, 2012) as the learning algorithm, Glorot-uniform (Glorot and Bengio, 2010) for initializing weight matrices, and uniform U(−0.01, 0.01) for initializing embeddings randomly. We applied batch normalization (Ioffe and Szegedy, 2015), which gave better results than using dropout. Minibatch size, embedding size and filter number were fixed to 32, 300 and 150, respectively. We tuned for optimal filter and pooling lengths in {2, · · · , 12}. We train up to 25 epochs, and select the model that performs best on the development set; see supplementary documents for best hyperparameter settings for different models. We run each experiment five times, each time with a different random seed, and we report the average of the runs to avoid any randomness in results. Statistical significance tests are done using an approximate randomization test with SIGF V.2 (Pad´o, 2006). Results and Discussions: We present our results on the standard discrimination task and the inverse-order task in Table 3; see Std (F1) and Inv (F1) columns, respectively. For space limitations, we only show F1 scores here, and report both accuracy and F1 in the supplementary document. We compare our lexicalized models (group III) with the unlexicalized models (group II) of Nguyen and Joty (2017).3 We also report the results of non-neural entity grid models (Elsner and Charniak, 2011b) in group I. The extended versions use entity-specific features. We experimented with both random and pretrained initialization for word embeddings in our lexicalized models. As can be noticed in Table 3, both versions give significant improvements over the unlexicalized models on both the standard and the inverse-order discrimination tasks (2.7 4.3% absolute). Our best model with Google pretrained embeddings (Mikolov et al., 2013) yields state-of-the-art results. We also experimented 2https://ntunlpsg.github.io/project/coherence/n-coh-acl17 3Our reproduced results for the neural grid model are slightly lower than their reported results (∼1%). We suspect this is due to the randomness in the experimental setup. Model Emb. Std (F1) Inv (F1) I Grid (E&C) 81.60 75.78 Ext. Grid (E&C) 84.95 80.34 II Neural Grid (N&J) Random 84.36 83.94 Ext. 
Neural Grid (N&J) Random 85.93 83.00 III Lex. Neural Grid Random 87.03† 86.88† Lex. Neural Grid Google 88.56† 88.23† Table 3: Discrimination results on the WSJ dataset. Superscript † indicates a lexicalized model is significantly superior to the unlexicalized Neural Grid (N&J) model with p-value < 0.01. with Glove (Pennington et al., 2014), which has more vocabulary coverage than word2vec – Glove covers 89.77% of our vocabulary items, whereas word2vec covers 85.66%. However, Glove did not perform well giving F1 score of 86% in the standard discrimination task. Schnabel et al. (2015) also report similar results where word2vec was found to be superior to Glove in most evaluation tasks. Our model also outperforms the extended neural grid model that relies on an additional feature extraction step for entity features. These results demonstrate the efficacy of lexicalization in capturing fine-grained entity information without loosing generalizability, thanks to distributed representation and pre-trained embeddings. 6 Experiments on Conversation We evaluate our coherence models for asynchronous conversations on two tasks: discrimination and thread reconstruction. 6.1 Evaluation on Discrimination The discrimination tasks are applicable to conversations also. We first present the dataset we use, then we describe how we create coherent and incoherent examples to train and test our models. Dataset: Our conversational corpus contains discussion threads regarding computer troubleshooting from the technology related news site CNET.4 This corpus was originally collected by Louis and Cohen (2015), and it contains 13,352 threads. For our experiments, we selected 3,825 threads assuring that each contains at least 3 and at most 15 posts. We use 2,400 threads for training, 750 for testing and 675 for development purposes. Table 4 shows some basic statistics about the resulting dataset. The threads roughly contain 29 sentences and 6 comments on average. 4https://www.cnet.com/ 565 #Thread Avg Com Avg Sen #Pairs (tree) #Pairs (path) Train 2,400 6.01 28.76 47,948 106,122 Test 750 5.75 27.79 14,986 33,852 Dev 675 6.27 30.70 13,485 28,897 Total 3,825 5.98 28.77 76,419 168,871 Table 4: Statistics on the CNET dataset. Model Settings and Training: To validate the efficacy of our conversational grid model, we compare it with the following baseline settings: • Temporal: In the temporal setting, we construct an entity grid from the chronological order of the sentences in a conversation, and use it with our monologue-based coherence models. Models in this setting thus disregard the structure of the conversation and treat it as a monologue. • Path-level: This is a special case of our model, where we consider each path (a column in our conversational grid) in the conversation tree separately. We construct an entity grid for a path and provide as input to our monologue-based models. To train the models with pairwise ranking, we create 20 incoherent conversations for each original conversation by shuffling the sentences in their temporal order. For models involving conversation trees (path-level and our model), the tree structure remains unchanged for original and permuted conversations, only the position of the sentences vary based on the permutation. Since the shuffling is done globally at the conversation level, this scheme allows us to compare the three representations (temporal, path-level and tree-level) fairly with the same set of permutations. 
An incoherent conversation may have paths in the tree that match the original paths. We remove those matched paths when training the path-level model. See Table 4 for number of pairs used for training and testing our models. We evaluate pathlevel models by aggregating correct/wrong decisions for the paths – if the model makes more correct decisions for the original conversation than the incoherent one, it is counted as a correct decision overall. Aggregating path-level coherence scores (e.g., by averaging or summing) would allow a coherence model to get awarded for assigning higher score to an original path (hence, correct) while making wrong decisions for the rest; see supplementary document for an example. Similar to the setting in Monologue, we did not train explicitly on the inverse-order task, rather use the trained model from the standard setting. Conv. Rep Model Emb. Std (F1) Inv (F1) Temporal Neural Grid (N&J) random 82.28 70.53 Lex. Neural Grid random 86.63 80.40 Lex. Neural Grid Google 87.17 80.76 Path-level Neural Grid (N&J) random 82.39 75.68† Lex. Neural Grid random 88.13 88.38† Lex. Neural Grid Google 88.44 89.31† Tree-level Neural Grid (N&J) random 83.98† 77.33† Lex. Neural Grid random 89.87† 89.23† Lex. Neural Grid Google 91.29† 90.40† Table 5: Discrimination results on CNET. Superscript † indicates a model is significantly superior to its temporal counterpart with p-value < 0.01. Results and Discussions: Table 5 compares the results of our models on the two discrimination tasks. We observe more gains in conversation than in monologue for the lexicalized models – 4.9% to 7.3% on the standard task, and 10% to 13.6% on the inverse-order task. Notice especially the huge gains on the inverse-order task. This indicates lexicalization helps to better adapt to new domains. A comparison of the results on the standard task across the representations shows that path-level models perform on par with the temporal models, whereas the tree-level models outperform others by a significant margin. The improvements are 2.7% for randomly initialized word vectors and 4% for Google embeddings. Although, the pathlevel model considers some conversational structures, it observes only a portion of the conversation in its input. The common topics (expressed by entities) of a conversation get distributed across multiple conversational paths. This limits the pathlevel model to learn complex relationships between entities in a conversation. By encoding an entire conversation into a single grid and by modeling the spatial relations between the entities, our conversational grid model captures both local and global information (topic) of a conversation. Interestingly, the improvements are higher on the inverse-order task for both path- and tree-level models. The inverse order yields more dissimilarity at the paths with respect to the original order, thus making them easier to distinguish. If we notice the hyperparameter settings for the best models on this task (see supplementary document), we see they use a filter width of 1. This indicates that to find the right order of the sentences in conversations, it is sufficient to consider entity transitions along the conversational paths in a tree. 566 6.2 Evaluation on Thread Reconstruction One crucial advantage of our tree-level model over other models is that we can use it to build predictive models to uncover the thread structure of a conversation from its posts. Consider again the thread in Figure 2. 
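The training objective referred to here is the max-margin ranking loss of Equation 5; a direct TensorFlow rendering is shown below, with batched scores for the coherent and incoherent grids of each ordered pair (the framework choice and the function name are ours).

```python
import tensorflow as tf

def pairwise_ranking_loss(pos_scores, neg_scores, margin=1.0):
    """Equation 5: max(0, 1 - phi(E_i) + phi(E_j)), averaged over a minibatch of
    (coherent, incoherent) grid pairs scored by the shared network."""
    return tf.reduce_mean(tf.maximum(0.0, margin - pos_scores + neg_scores))
```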
Our goal is to train a coherence model that can recover the tree structure in Figure 2(b) from the sequence of posts (p1, p2, . . . , p5). This task has been addressed previously (Wang et al., 2008, 2011). Most methods learn an edgelevel classifier to decide for a possible link between two posts using features like distance in position/time, cosine similarity, etc. To our knowledge, we are the first to use coherence models for this problem. However, our goal in this paper is not to build a state-of-the-art system for thread reconstruction, rather to evaluate coherence models by showing its effectiveness in scoring candidate tree hypotheses. In contrast to previous methods, our approach therefore considers the whole thread structure at once, and computes coherence scores for all possible candidate trees of a conversation. The tree that receives the highest score is predicted as the thread structure of the conversation. Training: We train our coherence model for thread reconstruction using pairwise ranking loss as before. For a given sequence of comments in a thread, we construct a set of valid candidate trees; a valid tree is one that respects the chronological order of the comments, i.e., a comment can only reply to a comment that precedes it. The training set contains ordered pairs (Ti, Tj), where Ti is a true (gold) tree and Tj is a valid but false tree. Experiments: The number of valid trees grows exponentially with the number of posts in a thread, which makes the inference difficult. As a proof of concept that coherence models are useful for finding the right tree, we built a simpler dataset by selecting forum threads from the CNET corpus ensuring that a thread contains at most 5 posts. The final dataset contains 1200 threads with an average of 3.8 posts and 27.64 sentences per thread. We assess the performance of the models at two levels: (i) thread-level, where we evaluate if the model could identify the entire conversation thread correctly, and (ii) edge-level, where we evaluate if the model could identify individual replies correctly. For comparison, we use a number of simple but well performing baselines: • All-previous creates thread structure by linking Thread-level Edge-level Acc F1 Acc All-previous 27.00 52.00 61.83 All-first 25.67 48.23 58.19 COS-sim 27.66 50.56 60.30 Conv. Entity Grid 30.33† 53.59† 62.81† Table 6: Thread reconstruction results; † indicates significant difference from COS-sim (p< .01). a comment to its previous (in time) comment. • All-first creates thread structure by linking all the comments to the initial comment. • COS-sim creates thread structure by linking a comment to one of the previous comments with which it has the highest cosine similarity. We use TF.IDF representation for the comments. Table 6 compares our best conversational grid model (tree-level with Google vectors) with the baselines. The low thread-level accuracy across all the systems prove that reconstructing an entire tree is a difficult task. Models are reasonably accurate at the edge level. Our coherence model shows promising results, yielding substantial improvements over the baselines. It delivers 2.7% improvements in thread-level and 2.5% in edgelevel accuracy over the best baseline (COS-sim). Interestingly, our best model for this task uses a filter width of 2 (maximum can be 4 for 5 posts). This indicates that spatial (left-to-right) relations between entity transitions are important to find the right thread structure of a conversation. 
7 Conclusion We presented a coherence model for asynchronous conversations. We first extended the existing neural grid model by lexicalizing its entity transitions. We then adapt the model to conversational discourse by incorporating the thread structure in its grid representation and feature computation. We designed a 3D grid representation for capturing spatio-temporal entity transitions in a conversation tree, and employed a 2D convolution to compose high-level features from this representation. Our lexicalized grid model yields state of the art results on standard coherence assessment tasks in monologue and conversations. We also show a novel application of our model in forum thread reconstruction. Our future goal is to use the coherence model to generate new conversations. 567 References Alberto Barron-Cedeno, Simone Filice, Giovanni Da San Martino, Shafiq Joty, Llu´ıs M`arquez, Preslav Nakov, and Alessandro Moschitti. 2015. Threadlevel information for comment classification in community question answering. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, ACL’15, pages 687–693, Beijing, China. Association for Computational Linguistics. Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1–34. Giuseppe Carenini, Raymond T. Ng, and Xiaodong Zhou. 2008. Summarizing emails with conversational cohesion and subjectivity. In Proceedings of the 46nd Annual Meeting on Association for Computational Linguistics, ACL’08, pages 353–361, OH. ACL. Micha Elsner and Eugene Charniak. 2011a. Disentangling chat with local coherence models. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 1179– 1189, Stroudsburg, PA, USA. Association for Computational Linguistics. Micha Elsner and Eugene Charniak. 2011b. Extending the entity grid with entity-specific features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, HLT ’11, pages 125–129, Portland, Oregon. Association for Computational Linguistics. Vanessa Wei Feng and Graeme Hirst. 2012. Extending the entity-based coherence model with multiple ranks. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, EACL ’12, pages 315–324, Avignon, France. Association for Computational Linguistics. Vanessa Wei Feng, Ziheng Lin, and Graeme Hirst. 2014. The impact of deep hierarchical discourse structures in the evaluation of text coherence. In COLING. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In JMLR W&CP: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), volume 9, pages 249–256, Sardinia, Italy. Barbara J. Grosz, Scott Weinstein, and Aravind K. Joshi. 1995. Centering: A framework for modeling the local coherence of discourse. Comput. Linguist., 21(2):203–225. Amit Gruber, Yair Weiss, and Michal Rosen-Zvi. 2007. Hidden topic markov models. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, volume 2 of Proceedings of Machine Learning Research, pages 163–170, San Juan, Puerto Rico. PMLR. Camille Guinaudeau and Michael Strube. 2013. 
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 569–578, Melbourne, Australia, July 15 - 20, 2018. ©2018 Association for Computational Linguistics 569

Deep Reinforcement Learning for Chinese Zero Pronoun Resolution

Qingyu Yin†, Yu Zhang†, Weinan Zhang†, Ting Liu†*, William Yang Wang‡
†Harbin Institute of Technology, China  ‡University of California, Santa Barbara, USA
{qyyin, yzhang, wnzhang, tliu}@ir.hit.edu.cn  [email protected]
*Corresponding author.

Abstract

Deep neural network models for Chinese zero pronoun resolution learn semantic information for the zero pronoun and its candidate antecedents, but tend to be short-sighted: they often make local decisions. They typically predict coreference chains between the zero pronoun and one single candidate antecedent one link at a time, while overlooking their long-term influence on future decisions. Ideally, modeling useful information about preceding potential antecedents is critical when later predicting zero pronoun-candidate antecedent pairs. In this study, we show how to integrate local and global decision-making by exploiting deep reinforcement learning models. With the help of the reinforcement learning agent, our model learns a policy for selecting antecedents in a sequential manner, so that useful information provided by earlier predicted antecedents can be utilized when making later coreference decisions. Experimental results on the OntoNotes 5.0 dataset show that our technique surpasses the state-of-the-art models.

1 Introduction

Zero pronouns, a linguistic phenomenon particular to pro-drop languages, are pervasive in Chinese documents (Zhao and Ng, 2007). A zero pronoun is a gap in a sentence that stands for an element omitted because the surrounding discourse makes it recoverable. The following is an example of zero pronouns in a Chinese document, where zero pronouns are represented as "φ" (English gloss of the Chinese original):

([Litigant Li Yading] not only shows φ1 willingness of acceptance, but also φ2 hopes that there should be someone in charge of it.)

A zero pronoun can be an anaphoric zero pronoun, if it corefers to one or more mentions in the associated text, or unanaphoric, if there are no such mentions. In this example, the second zero pronoun "φ2" is anaphoric and corefers to the mention "Litigant Li Yading", while the zero pronoun "φ1" is unanaphoric. The mentions that carry the information needed to interpret the zero pronoun are called its antecedents.

In recent years, deep learning models for Chinese zero pronoun resolution have been widely investigated (Chen and Ng, 2016; Yin et al., 2017a,b). These solutions concentrate on anaphoric zero pronoun resolution, applying various neural network models to zero pronoun-candidate antecedent prediction. Neural network models have demonstrated their capability to learn vector-space semantics of zero pronouns and their antecedents (Yin et al., 2017a,b), and substantially surpass classic models (Zhao and Ng, 2007; Chen and Ng, 2013, 2015), obtaining state-of-the-art results on the benchmark dataset. However, these models make heavily local coreference decisions. They simply consider the coreference chain between the zero pronoun and one single candidate antecedent one link at a time, while overlooking the impact on future decisions.
Intuitively, antecedents provide key linguistic cues for interpreting the zero pronoun; it is therefore reasonable to leverage useful information provided by previously predicted antecedents as cues for predicting the later zero pronoun-candidate antecedent pairs. For instance, given the sentence "I have confidence that φ can do it." with candidate mentions "he", "the boy" and "I", it is challenging to infer whether the mention "I" could be the antecedent if it is considered separately. In that case, the resolver may incorrectly predict "I" to be the antecedent, since "I" is the nearest mention. Nevertheless, if we know that "he" and "the boy" have already been predicted to be antecedents, it is straightforward to label the φ-"I" pair as "non-coreference", because "I" refers to a different entity from the one referred to by "he". Hence, a desirable resolver should be able to 1) take advantage of cues from previously predicted antecedents, which can be incorporated to help classify later candidate antecedents, and 2) model the long-term influence of each single coreference decision in a sequential manner.

To achieve these goals, we propose a deep reinforcement learning model for anaphoric zero pronoun resolution. On top of the neural network models (Yin et al., 2017a,b), two main innovations are introduced that effectively leverage the information provided by potential antecedents and make long-term decisions from a global perspective. First, when dealing with a specific zero pronoun-candidate antecedent pair, our system encodes in vector space all preceding candidates that have been predicted to be antecedents. This representative vector is regarded as the antecedent information, which can be utilized to measure the coreference probability of the zero pronoun-candidate antecedent pair. In addition, a policy-based deep reinforcement learning algorithm is applied to learn the policy for making coreference decisions on zero pronoun-candidate antecedent pairs. The key idea behind our reinforcement learning model is to cast antecedent determination as a sequential decision process, where our model learns to link the zero pronoun to its potential antecedents incrementally. By encoding the antecedents predicted in previous states, our model is capable of exploring the long-term influence of individual decisions, producing more accurate results than models that simply consider the limited information in one single state.

Our strategy is favorable in the following respects. First, the proposed model learns to make decisions using linguistic cues from previously predicted antecedents. Instead of simply making local decisions, our technique allows the model to learn which action (predicting a candidate to be an antecedent) available from the current state can eventually lead to high overall performance. Second, instead of requiring supervised signals at each time step, the deep reinforcement learning model optimizes its policy based on an overall reward signal. In other words, it learns to directly optimize the overall evaluation metrics, which is more effective than learning with loss functions that heuristically define the goodness of a particular single decision. Our experiments are conducted on the OntoNotes dataset. Compared to baseline systems, our model obtains significant improvements, achieving state-of-the-art performance for zero pronoun resolution. The major contributions of this paper are three-fold.
• We are the first to consider reinforcement learning models for zero pronoun resolution in Chinese documents;
• The proposed deep reinforcement learning model leverages linguistic cues provided by the antecedents predicted in earlier states when classifying later candidate antecedents;
• We evaluate our reinforcement learning model on a benchmark dataset, where a considerable improvement is gained over the state-of-the-art systems.

The rest of this paper is organized as follows. The next section describes our deep reinforcement learning model for anaphoric zero pronoun resolution. Section 3 presents our experiments, including the dataset description, evaluation metrics, experiment results, and analysis. We outline related work in Section 4. Section 5 concludes and discusses future work.

2 Methodology

In this section, we introduce the technical details of the proposed reinforcement learning framework. The task of anaphoric zero pronoun resolution is to select antecedents for the zero pronoun from among its candidate antecedents. We formulate this as a sequential decision process in a reinforcement learning setting. We first describe the environment of the Markov decision process and our reinforcement learning agent. Then, we introduce the modules. The last subsection describes the supervised pre-training technique of our model.

Figure 1: Illustration of our reinforcement learning framework. Given a zero pronoun with n candidate antecedents (presented as "NP"), at each time step the agent scores a zero pronoun-candidate antecedent pair for its likelihood of coreference using 1) the zero pronoun; 2) the candidate antecedent and 3) antecedent information. Antecedent information at time t is generated from all the antecedents predicted in previous states.

2.1 Reinforcement Learning for Zero Pronoun Resolution

Given an anaphoric zero pronoun zp, a set of candidate antecedents needs to be selected from its associated text. In particular, we adopt the heuristic method used in recent Chinese anaphoric zero pronoun resolution work (Chen and Ng, 2016; Yin et al., 2017a,b) for this purpose. Among the noun phrases that are at most two sentences away from the zero pronoun, we select the maximal noun phrases and modifier noun phrases to compose the candidate set. These noun phrases ({np_1, np_2, ..., np_n}) and the zero pronoun (zp) are then encoded into representation vectors {v_{np_1}, v_{np_2}, ..., v_{np_n}} and v_{zp}.

Previous neural network models (Chen and Ng, 2016; Yin et al., 2017a,b) generally use pairwise models to select antecedents. In these works, candidate antecedents and the zero pronoun are first merged into pairs {(zp, np_1), (zp, np_2), ..., (zp, np_n)}, and different neural networks are then applied to each pair independently. We argue that these models only make local decisions while overlooking their impact on future decisions. In contrast, we formulate the antecedent determination process as a Markov decision process and design a reinforcement learning algorithm that learns to classify candidate antecedents incrementally. When predicting a single zero pronoun-candidate antecedent pair, our model leverages antecedent information generated from previously predicted antecedents, making coreference decisions based on global signals.
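To make the sequential decision process concrete, the following is a minimal sketch (ours, not the authors' code) of how an episode might be rolled out over the candidates of a single zero pronoun; `make_state` and `policy` are hypothetical stand-ins for the state construction and policy network described in the rest of this section.

```python
import random

COREFER, NON_COREFER = 1, 0

def run_episode(zp_vec, candidate_vecs, feature_vecs, make_state, policy):
    """Visit the candidate antecedents of one zero pronoun in textual order;
    each decision is conditioned on the candidates already predicted to be
    antecedents (the 'antecedent information' part of the state)."""
    predicted = []     # vectors of candidates judged to corefer so far
    trajectory = []    # (state, action, action probability) for the learner
    for np_vec, feat_vec in zip(candidate_vecs, feature_vecs):
        state = make_state(zp_vec, np_vec, predicted, feat_vec)
        p_corefer = policy(state)        # probability of the COREFER action
        action = COREFER if random.random() < p_corefer else NON_COREFER
        if action == COREFER:
            predicted.append(np_vec)
        trajectory.append((state, action,
                           p_corefer if action == COREFER else 1.0 - p_corefer))
    return predicted, trajectory
```

Sampling (rather than greedily thresholding) the action is one way to realize the stochastic policy discussed below; the trajectory is what a policy-gradient learner would later reweight by the observed reward.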
The architecture of our reinforcement learning framework is shown in Figure 1. At each time step, our reinforcement learning agent scores the zero pronoun-candidate antecedent pair using 1) the zero pronoun; 2) information about the current candidate antecedent and 3) antecedent information generated from antecedents predicted in previous states. In particular, our reinforcement learning agent is designed as a policy network π_θ(s, a) = p(a|s; θ), where s represents the state, a indicates the action, and θ represents the parameters of the model. The parameters θ are trained using stochastic gradient descent. Compared with a Deep Q-Network (Mnih et al., 2013), which commonly learns a greedy policy, a policy network is able to learn a stochastic policy that prevents the agent from getting stuck in an intermediate state (Xiong et al., 2017). Additionally, the learned policy is more explainable than the value functions learned by a Deep Q-Network. We now introduce the definitions of the components of our reinforcement learning model, namely, state, action, and reward.

2.1.1 State

Given a zero pronoun zp with its representation v_{zp} and the representations of all of its candidate antecedents {v_{np_1}, v_{np_2}, ..., v_{np_n}}, our model generates coreference decisions for zero pronoun-candidate antecedent pairs in sequence. More specifically, at each time step, the state is generated using both the vectors of the current zero pronoun-candidate antecedent pair and the candidates that have been predicted to be antecedents in previous states. At time t, the state vector s_t is generated as follows:

s_t = (v_{zp}, v_{np_t}, v_{ante(t)}, v_{feature_t})    (1)

where v_{zp} and v_{np_t} are the vectors of zp and np_t at time t. As shown in Chen and Ng (2016), human-designed handcrafted features are essential for the resolver since they reveal the syntactic, positional and other relations between a zero pronoun and its candidate antecedents. Hence, to evaluate the coreference possibility of each candidate antecedent in a comprehensive manner, we integrate a group of features used in previous work (Zhao and Ng, 2007; Chen and Ng, 2013, 2016) into our model. Multi-value features are decomposed into corresponding sets of binary-value ones. v_{feature_t} represents the feature vector. v_{ante(t)} represents the antecedent information generated from candidates that have been predicted to be antecedents in previous states. These vectors are concatenated to form the representation of the state and fed into the deep reinforcement learning agent to generate the action.

2.1.2 Action

The action for each state is either corefer, indicating that the zero pronoun and the candidate antecedent corefer, or non-corefer otherwise. If the corefer action is taken, we retain the vector of the corresponding antecedent, together with those of the antecedents predicted in previous states, to generate the vector v_{ante}, which is used to produce the antecedent information in the next state.

2.1.3 Reward

Once the agent executes a series of actions, it observes a reward R(a_{1:T}) that could be any function. To encourage the agent to find accurate antecedents, we use the F-score of the selected antecedents as the reward for each action on a path.
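As a concrete (and simplified) reading of this reward, the sketch below computes the F-score of the selected antecedent set against the gold antecedents of the zero pronoun; the function name and the treatment of empty sets are our own assumptions, not taken from the paper.

```python
def antecedent_reward(selected, gold):
    """F-score of the selected antecedent set against the gold antecedents;
    this single value serves as the reward for every action on the path."""
    selected, gold = set(selected), set(gold)
    if not selected or not gold:
        return 0.0
    overlap = len(selected & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(selected)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)
```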
2.2 Reinforcement Learning Agent

Our reinforcement learning agent is comprised of three parts: the zero pronoun encoder, which learns to encode a zero pronoun into vectors using its context words; the candidate mention encoder, which represents the candidate antecedents by their content words; and the agent, which maps the state vector s to a probability distribution over all possible actions.

In this work, the ZP-centered neural network model proposed by Yin et al. (2017a) is employed as the zero pronoun encoder. The encoder learns to encode the zero pronoun into its vector-space semantics from its associated text. In particular, two standard recurrent neural networks are employed to encode the preceding text and the following text of a zero pronoun, separately. Such a model learns to encode the associated text around the zero pronoun, exploiting sentence-level information.

For the candidate mention encoder, we adopt a recurrent neural network-based model that encodes these phrases using their content words. More specifically, we utilize a standard recurrent neural network to model the content of a phrase from left to right. This model learns to produce the vector of a phrase from its content, giving our model the ability to capture its vector-space semantics. In this way, we generate the vector v_{zp} for zp and the representation vectors of all its candidate antecedents, denoted {v_{np_1}, v_{np_2}, ..., v_{np_n}}.

Moreover, we employ pooling operations to encode antecedent information from the antecedents predicted in previous states. In particular, we generate two vectors by applying max-pooling and average-pooling, respectively, and concatenate them. Let the representation vector of the t-th candidate antecedent be v_{np_t} ∈ R^d, and write the antecedents predicted by time t as S(t) = [v_{np_i}, v_{np_j}, ..., v_{np_r}]; the k-th element of the antecedent-information vector at time t, v_{ante(t)}[k], is generated by:

v_{ante(t)}[k] = max{S(t)_{k,·}}     for 0 ≤ k < d
v_{ante(t)}[k] = ave{S(t)_{k−d,·}}   for d ≤ k < 2d

Figure 2: Illustration of the feedforward neural network model employed as the agent. Its input vector includes these parts: (1) zero pronoun; (2) candidate antecedent; (3) pair features and (4) antecedents. By going through the fully-connected hidden layers and one softmax layer, the agent maps the state vector into a probability distribution over actions that indicates the coreference likelihood of the input zero pronoun-candidate antecedent pair.

The concatenation of these vectors is regarded as the input and fed into our reinforcement learning agent. More specifically, a feed-forward neural network constitutes the agent, mapping the state vector to a probability distribution over all possible actions. Figure 2 shows the architecture of the agent. Two hidden layers are employed in our model, each using tanh as the activation function. For each layer, we generate the output by:

h_i(s_t) = tanh(W_i h_{i−1}(s_t) + b_i)    (2)

where W_i and b_i are the parameters of the i-th hidden layer and s_t represents the state vector. After going through all the layers, we obtain the representation vector for the zero pronoun-candidate antecedent pair (zp, np_t). We then feed it into a scoring layer to obtain their coreference score.
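A minimal PyTorch sketch of the antecedent-information pooling and the two tanh hidden layers just described follows (the 2-way scoring layer and softmax over actions are given in the next paragraph); the class and function names and hidden sizes are illustrative, and batching is omitted.

```python
import torch
import torch.nn as nn

def antecedent_info(predicted_antecedents, dim):
    """v_ante: concatenation of element-wise max- and average-pooling over the
    vectors of antecedents predicted so far (zeros when none are selected yet)."""
    if not predicted_antecedents:
        return torch.zeros(2 * dim)
    stacked = torch.stack(predicted_antecedents)          # (num_selected, dim)
    return torch.cat([stacked.max(dim=0).values, stacked.mean(dim=0)])

class AgentHiddenLayers(nn.Module):
    """Two tanh hidden layers over the concatenated state vector
    (zero pronoun, candidate NP, pair features, antecedent information)."""
    def __init__(self, state_dim, hidden1=256, hidden2=512):
        super().__init__()
        self.layer1 = nn.Linear(state_dim, hidden1)
        self.layer2 = nn.Linear(hidden1, hidden2)

    def forward(self, state):
        h1 = torch.tanh(self.layer1(state))
        h2 = torch.tanh(self.layer2(h1))
        return h2   # fed to the 2-way scoring layer described below
```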
The scoring layer is a fully-connected layer of dimension 2:

score(zp, np_t) = W_s h_2(s_t) + b_s    (3)

where h_2 represents the output of the second hidden layer, W_s ∈ R^{2×r} is the parameter of the layer, and r is the dimension of h_2. We then generate the probability distribution over actions from the output of the scoring layer, where a softmax layer is employed to obtain the probability of each action:

p_θ(a) ∝ e^{score(zp, np_t)}    (4)

In this work, policy-based reinforcement learning is employed to train the parameters of the agent. More specifically, we use the REINFORCE policy gradient algorithm (Williams, 1992), which learns to maximize the expected reward:

J(θ) = E_{a_{1:T} ∼ p(a|zp, np_t; θ)}[R(a_{1:T})] = Σ_t Σ_a p(a|zp, np_t; θ) R(a_t)    (5)

where p(a|zp, np_t; θ) indicates the probability of selecting action a. Intuitively, the gradient estimate may have very high variance. A commonly used remedy to reduce the variance is to subtract a baseline value b from the reward. Hence, we use the following gradient estimate:

∇_θ J(θ) = ∇_θ Σ_t log p(a|zp, np_t; θ) (R(a_t) − b_t)    (6)

Following Clark and Manning (2016), we introduce the baseline b and obtain its value b_t at time t as E_{a_{t′} ∼ p}[R(a_1, ..., a_{t′}, ..., a_T)].

2.3 Pretraining

Pretraining is crucial in reinforcement learning techniques (Clark and Manning, 2016; Xiong et al., 2017). In this work, we pretrain the model using the loss function from Yin et al. (2017a):

loss = − Σ_{i=1}^{N} Σ_{np ∈ A(zp_i)} δ(zp_i, np) log P(np|zp_i)    (7)

where P(np|zp_i) is the coreference score generated by the agent (the probability of choosing the corefer action); A(zp_i) represents the candidate antecedents of zp_i; and δ(zp, np) is 1 or 0, indicating whether zp and np corefer.

3 Experiments

3.1 Dataset and Settings

3.1.1 Dataset

As in recent work on Chinese zero pronouns (Chen and Ng, 2016; Yin et al., 2017a,b), the proposed model is evaluated on the Chinese portion of the OntoNotes 5.0 dataset (http://catalog.ldc.upenn.edu/LDC2013T19) that was used in the CoNLL-2012 Shared Task. Documents in this dataset come from six different sources, namely, Broadcast News (BN), Newswire (NW), Broadcast Conversations (BC), Telephone Conversations (TC), Web Blogs (WB) and Magazines (MZ). Since zero pronoun coreference annotations exist only in the training and development sets (Chen and Ng, 2016), we use the training set for training purposes and test our model on the development set. The statistics of our dataset are reported in Table 1. For an equal comparison, we adopt the strategy used in existing work (Chen and Ng, 2016; Yin et al., 2017a), where 20% of the training set is randomly selected and reserved as a development set for tuning the model.

Table 1: Statistics of the training and test datasets.
           #Documents  #Sentences  #AZPs
Training   1,391       36,487      12,111
Test       172         6,083       1,713

3.1.2 Evaluation Measures

Following previous work on zero pronoun resolution (Zhao and Ng, 2007; Chen and Ng, 2016; Yin et al., 2017a,b), the metrics employed to evaluate our model are recall, precision, and F-score (F). We report the performance for each source in addition to the overall result.

3.1.3 Baselines and Experiment Settings

Five recent zero pronoun resolution systems are employed as our baselines, namely, Zhao and Ng (2007), Chen and Ng (2015), Chen and Ng (2016), Yin et al. (2017a) and Yin et al. (2017b). The first is machine learning-based, the second is unsupervised, and the others are all deep learning models.
Since we concentrate on the anaphoric zero pronoun resolution process, we run experiments in the setting with ground-truth parse results and ground-truth anaphoric zero pronouns, all of which come from the original dataset. Moreover, to illustrate the effectiveness of our reinforcement learning model, we run a set of ablation experiments with different numbers of pretraining iterations and report the performance of our model for each. Besides, to explore the randomness of the reinforcement learning technique, we report the performance variation of our model with different random seeds.

3.1.4 Implementation Details

We randomly initialize the parameters and minimize the objective function using Adagrad (Duchi et al., 2011). The embedding dimension is 100, and the hidden layers have 256 and 512 dimensions, respectively. Moreover, dropout (Hinton et al., 2012) regularization is added to the output of each layer. Table 2 shows the hyperparameters used for both the pre-training and reinforcement learning processes. The hyperparameters were selected based on preliminary experiments, and there remains considerable room for improvement, for instance by applying annealing.

Table 2: Hyperparameters for the pre-training (Pre) and reinforcement learning (RL).
                    Pre        RL
hidden dimensions   256 & 512  256 & 512
training epochs     70         50
batch               256        256
dropout rate        0.5        0.7
learning rate       0.003      0.00009

3.2 Experiment Results

In Table 3, we compare the results of our model with the baselines on the test dataset. Our reinforcement learning model surpasses all previous baselines. More specifically, for the "Overall" results, our model obtains a considerable improvement of 2.3% in F-score over the best baseline (Yin et al., 2017a). Moreover, we run experiments on the different sources of documents and report the results for each source. The number following a source's name indicates the number of anaphoric zero pronouns in that source. Our model beats the best baseline in four of the six sources, demonstrating the effectiveness of our reinforcement learning model. The improvement over the best baseline on source "BC" is 4.3% in F-score, which is encouraging since it contains the most anaphoric zero pronouns. All in all, these results suggest that our model surpasses the existing baselines, which demonstrates the effectiveness of the proposed technique.

Table 3: Experiment results on the test data. The first six columns show the results on the different sources of documents and the last column is the overall result.
                     NW (84)  MZ (162)  WB (284)  BN (390)  BC (510)  TC (283)  Overall
Zhao and Ng (2007)   40.5     28.4      40.1      43.1      44.7      42.8      41.5
Chen and Ng (2015)   46.4     39.0      51.8      53.8      49.4      52.7      50.2
Chen and Ng (2016)   48.8     41.5      56.3      55.4      50.8      53.1      52.2
Yin et al. (2017b)   50.0     45.0      55.9      53.3      55.3      54.4      53.6
Yin et al. (2017a)   48.8     46.3      59.8      58.4      53.2      54.8      54.9
Our model            63.1     50.2      63.1      56.7      57.5      54.0      57.2

Ideally, our model learns useful information gathered from candidates that have been predicted to be antecedents in previous states, which brings a global view instead of simply making partial decisions. By applying reinforcement learning, our model learns to directly optimize the overall performance in expectation, which guides the model when making decisions in a sequential manner. Consequently, it predicts more accurate antecedents, leading to better performance.
Moreover, to better illustrate the effectiveness of the proposed reinforcement learning model, we run a set of experiments with different settings. In particular, we compare the model with and without the proposed reinforcement learning process using different numbers of pre-training iterations. Each time, we report the performance of our model on both the test and development sets. For all these experiments, we keep the rest of the model unchanged.

Figure 3: Experiment results of different models, where "RL" represents the reinforcement learning algorithm and "Pre" represents the model without reinforcement learning. "dev" shows the performance of our reinforcement learning model on the development dataset.

Figure 3 shows the performance of our model with and without reinforcement learning. We can see that the model with reinforcement learning achieves better performance than the model without it across the board. With the help of reinforcement learning, our model learns to choose effective actions in sequential decisions. It empowers the model to directly optimize the overall evaluation metrics, which is a more effective and natural way of dealing with the task. Moreover, since the performance on the development dataset stops increasing beyond 70 iterations, we set the number of pretraining iterations to 70.

Following Reimers and Gurevych (2017), to illustrate the impact of randomness in our reinforcement learning model, we run our model with different random seed values. Table 4 shows the performance of our model with different random seeds on the test dataset. We report the minimum, median, and maximum F-scores and the standard deviation σ of the F-scores.

Table 4: Performance of our model with different random seeds.
Min F   Median F   Max F   σ
56.5    57.1       57.5    0.00253

We run the model with 38 different random seeds. The maximum F-score is 57.5% and the minimum is 56.5%. Based on this observation, we conclude that our proposed reinforcement learning model consistently beats the baselines and achieves state-of-the-art performance.

3.3 Case Study

Lastly, we show a case to illustrate the effectiveness of our proposed model, as shown in Figure 4.

Figure 4: Example of the case study. Noun phrases with a pink background are the ones selected as antecedents by our model.

In this case, we can see that our model correctly predicts the mentions "那小穗/The Xiaohui" and "她/She" as the antecedents of the zero pronoun "φ". This case demonstrates the effectiveness of our model. Instead of making only local decisions, our model learns to predict potential antecedents incrementally, selecting globally optimal antecedents in a sequential manner. In the end, our model successfully predicts "她/She" as the result.

4 Related Work

4.1 Zero Pronoun Resolution

A wide variety of machine learning models for Chinese zero pronoun resolution have been proposed. Zhao and Ng (2007) utilized decision trees to learn an anaphoric zero pronoun resolver using syntactic and positional features; it was the first time machine learning techniques were applied to this task. To better exploit syntax, Kong and Zhou (2010) employed the tree kernel technique in their model.
Chen and Ng (2013) further extended Zhao and Ng (2007)'s model by integrating novel features and using coreference chains between zero pronouns as bridges to find antecedents. In contrast, unsupervised techniques have also been proposed and have shown their effectiveness. Chen and Ng (2014) proposed an unsupervised model, where a model trained on manually resolved pronouns was employed for the resolution of zero pronouns. Chen and Ng (2015) proposed an unsupervised anaphoric zero pronoun resolver, using a salience model to deal with the issue. Besides, there has been extensive work on zero anaphora for other languages. Efforts on zero pronoun resolution fall into two major categories, namely, (1) heuristic techniques (Han, 2006); and (2) learning-based models (Isozaki and Hirao, 2003; Iida et al., 2006, 2007; Sasano and Kurohashi, 2011; Iida and Poesio, 2011; Iida et al., 2015, 2016).

In recent years, deep learning techniques have been extensively studied for zero pronoun resolution. Chen and Ng (2016) introduced a deep neural network resolver for this task, in which zero pronouns and candidates are encoded by a feedforward neural network. Liu et al. (2017) explored producing pseudo datasets for anaphoric zero pronoun resolution; they trained their deep learning model with a two-step learning method that overcomes the discrepancy between the generated pseudo dataset and the real one. To better utilize vector-space semantics, Yin et al. (2017b) employed recurrent neural networks to encode zero pronouns and antecedents. In particular, a two-layer antecedent encoder was employed to generate hierarchical representations of antecedents. Yin et al. (2017a) developed a deep memory network resolver, where zero pronouns are encoded by their potential antecedent mentions and associated text.

The major difference between our model and existing techniques lies in the application of deep reinforcement learning. In this work, we formulate anaphoric zero pronoun resolution as a sequential decision process in a reinforcement learning setting. With the help of reinforcement learning, our resolver learns to classify mentions in a sequential manner, making globally optimal decisions. Consequently, our model learns to take advantage of earlier predicted antecedents when making later coreference decisions.

4.2 Deep Reinforcement Learning

Recent advances in deep reinforcement learning have shown promising results in a variety of natural language processing tasks (Branavan et al., 2012; Narasimhan et al., 2015; Li et al., 2016). Recently, Clark and Manning (2016) proposed a deep reinforcement learning model for coreference resolution, where an agent is utilized for linking mentions to their potential antecedents. They utilized the policy gradient algorithm to train the model and achieved better results than the counterpart neural network model. Narasimhan et al. (2016) introduced a deep Q-learning based slot-filling technique, where the agent's action is to retrieve or reconcile content from a new document. Xiong et al. (2017) proposed a reinforcement learning framework for learning multi-hop relational paths. Deep reinforcement learning is a natural choice for tasks that require making incremental decisions. By combining non-linear function approximation with reinforcement learning, the deep reinforcement learning paradigm can integrate vector-space semantics into a robust joint learning and reasoning process.
Moreover, by optimizing the policy based on the reward signal, a deep reinforcement learning model relies less on heuristic loss functions that require careful tuning.

5 Conclusion

We introduce a deep reinforcement learning framework for Chinese zero pronoun resolution. Our model learns a policy for selecting antecedents in a sequential manner, leveraging effective information provided by the earlier predicted antecedents. This strategy contributes to the prediction of later antecedents and provides a natural view of the task. Experiments on the benchmark dataset show that our reinforcement learning model achieves an overall F-score of 57.2% on the test dataset, surpassing all existing models by a considerable margin.

In the future, we plan to explore neural network models for effectively resolving anaphoric zero pronouns in documents, and to study specific components that might influence the performance of the model, such as the embeddings. Meanwhile, we plan to investigate the possibility of applying adversarial learning (Goodfellow et al., 2014) to generate better rewards than human-defined reward functions. Besides, to deal with the problematic scenario where ground-truth parse trees and anaphoric zero pronouns are unavailable, we are interested in exploring a neural network model that integrates anaphoric zero pronoun determination and anaphoric zero pronoun resolution jointly in a hierarchical architecture, without using a parser or an anaphoric zero pronoun detector. Our code is available at https://github.com/qyyin/Reinforce4ZP.git.

Acknowledgments

We thank the anonymous reviewers for their valuable comments. This work was supported by the Major State Basic Research Development 973 Program of China (No. 2014CB340503) and the National Natural Science Foundation of China (No. 61472105 and No. 61502120). In accordance with the policy of Harbin Institute of Technology, the contact author of this paper is Ting Liu.

References

SRK Branavan, David Silver, and Regina Barzilay. 2012. Learning to win by reading manuals in a Monte-Carlo framework. Journal of Artificial Intelligence Research, 43:661–704.
Chen Chen and Vincent Ng. 2013. Chinese zero pronoun resolution: Some recent advances. In EMNLP, pages 1360–1365.
Chen Chen and Vincent Ng. 2014. Chinese zero pronoun resolution: An unsupervised approach combining ranking and integer linear programming. In Twenty-Eighth AAAI Conference on Artificial Intelligence.
Chen Chen and Vincent Ng. 2015. Chinese zero pronoun resolution: A joint unsupervised discourse-aware model rivaling state-of-the-art resolvers. In Proceedings of the 53rd Annual Meeting of the ACL and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), page 320.
Chen Chen and Vincent Ng. 2016. Chinese zero pronoun resolution with deep neural networks. In Proceedings of the 54th Annual Meeting of the ACL.
Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of EMNLP 2016.
John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680.
Na-Rae Han. 2006. Korean zero pronouns: analysis and resolution. Ph.D. thesis, Citeseer.
Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580.
Ryu Iida, Kentaro Inui, and Yuji Matsumoto. 2006. Exploiting syntactic patterns as clues in zero-anaphora resolution. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 625–632. Association for Computational Linguistics.
Ryu Iida, Kentaro Inui, and Yuji Matsumoto. 2007. Zero-anaphora resolution by learning rich syntactic pattern features. ACM Transactions on Asian Language Information Processing (TALIP), 6(4):1.
Ryu Iida and Massimo Poesio. 2011. A cross-lingual ILP solution to zero anaphora resolution. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 804–813. Association for Computational Linguistics.
Ryu Iida, Kentaro Torisawa, Chikara Hashimoto, Jong-Hoon Oh, and Julien Kloetzer. 2015. Intra-sentential zero anaphora resolution using subject sharing recognition. In Proceedings of EMNLP 2015, pages 2179–2189.
Ryu Iida, Kentaro Torisawa, Jong-Hoon Oh, Canasai Kruengkrai, and Julien Kloetzer. 2016. Intra-sentential subject zero anaphora resolution using multi-column convolutional neural network. In Proceedings of EMNLP.
Hideki Isozaki and Tsutomu Hirao. 2003. Japanese zero pronoun resolution based on ranking rules and machine learning. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 184–191. Association for Computational Linguistics.
Fang Kong and Guodong Zhou. 2010. A tree kernel-based unified framework for Chinese zero anaphora resolution. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 882–891. Association for Computational Linguistics.
Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202.
Ting Liu, Yiming Cui, Qingyu Yin, Shijin Wang, Weinan Zhang, and Guoping Hu. 2017. Generating and exploiting large-scale pseudo training data for zero pronoun resolution. In ACL.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing Atari with deep reinforcement learning. NIPS.
Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. 2015. Language understanding for text-based games using deep reinforcement learning. In Proceedings of EMNLP 2015.
Karthik Narasimhan, Adam Yala, and Regina Barzilay. 2016. Improving information extraction by acquiring external evidence with reinforcement learning. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2355–2365.
Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338–348.
Ryohei Sasano and Sadao Kurohashi. 2011. A discriminative approach to Japanese zero anaphora resolution with large-scale lexicalized case frames. In IJCNLP, pages 758–766.
Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning.
Machine Learning, 8(3-4):229–256.
Wenhan Xiong, Thien Hoang, and William Yang Wang. 2017. DeepPath: A reinforcement learning method for knowledge graph reasoning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.
Qingyu Yin, Yu Zhang, Weinan Zhang, and Ting Liu. 2017a. Chinese zero pronoun resolution with deep memory network. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1309–1318.
Qingyu Yin, Yu Zhang, Weinan Zhang, and Ting Liu. 2017b. A deep neural network for Chinese zero pronoun resolution. In IJCAI.
Shanheng Zhao and Hwee Tou Ng. 2007. Identification and resolution of Chinese zero pronouns: A machine learning approach. In EMNLP-CoNLL, volume 2007, pages 541–550.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 579–589, Melbourne, Australia, July 15 - 20, 2018. ©2018 Association for Computational Linguistics 579

Entity-Centric Joint Modeling of Japanese Coreference Resolution and Predicate Argument Structure Analysis

Tomohide Shibata†‡ and Sadao Kurohashi†‡
†Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto, 606-8501, Japan
‡CREST, JST, 4-1-8, Honcho, Kawaguchi-shi, Saitama, 332-0012, Japan
{shibata, kuro}@i.kyoto-u.ac.jp

Abstract

Predicate argument structure analysis is the task of identifying structured events. To advance this field, we need to identify salient entities, which cannot be identified without performing coreference resolution and predicate argument structure analysis simultaneously. This paper presents an entity-centric joint model for Japanese coreference resolution and predicate argument structure analysis. Each entity is assigned an embedding, and when the result of either analysis refers to an entity, the entity embedding is updated. The analyses take the entity embedding into consideration to access global information about entities. Our experimental results demonstrate that the proposed method can drastically improve the performance of inter-sentential zero anaphora resolution, which is a notoriously difficult task in predicate argument structure analysis.

1 Introduction

Natural language often conveys a sequence of events like "who did what to whom", and extracting structured events from raw text is a kind of touchstone for machine reading. This is realized by a combination of coreference resolution (called CR hereafter) and predicate argument structure analysis (called PA hereafter).

The characteristics and difficulties of these analyses vary among languages. In English, there are few omissions of arguments, and thus PA is relatively easy, around 83% accuracy (He et al., 2017), while CR is relatively difficult, around 70% accuracy (Lee et al., 2017). On the other hand, in Japanese and Chinese, where arguments are often omitted, PA is a difficult task, and even state-of-the-art systems achieve only around 50% accuracy. Zero anaphora resolution (ZAR) is a difficult subtask of PA, which detects a zero pronoun and identifies the referent of the zero pronoun. As the following example shows, CR in English (identifying the antecedent of it) and ZAR in Japanese (identifying the omitted nominative argument) are similar problems.

(1) a. John bought a car last month. It was made by Toyota.
b. ジョンは (John-TOP) 先月 (last month) 車を (a car-ACC) 買った。 (bought.) (φが) (φ-NOM) トヨタ製だった。 (Toyota made-COPULA.)

Note that CR, such as the relation between "the company" and "Toyota", is also difficult in Japanese. According to the argument position relative to the predicate, ZAR is classified into the following three types:

• intra-sentential (intra for short): the argument is located in the same sentence as the predicate
• inter-sentential (inter for short): the argument is located in a preceding sentence, such as "車" for "トヨタ製だった" (Toyota made-COPULA) in sentence (1b)
• exophora: the argument does not appear in the document, such as the author or the reader

Among these three types, the analysis of inter is extremely difficult because there are many candidates in the preceding sentences, and clues such as the dependency path between a predicate and an argument cannot be used. This paper presents a joint model of CR and PA in Japanese.
It is necessary to perform them together because PA (especially inter-sentential ZAR) needs to identify salient entities, which cannot be identified without performing CR and PA simultaneously. Our results support this claim, and suggest that the status quo of PA-exclusive research in Japanese is an insufficient approach.

Figure 1: An overview of our proposed method. The phrases in red represent predicates.

Our work is inspired by Wiseman et al. (2016), who describe an English CR system in which entities are represented by embeddings that are updated dynamically by CR results. We perform Japanese CR and PA by extending this idea. Our experimental results demonstrate that the proposed method can drastically improve the performance of inter-sentential zero anaphora resolution.

2 Related Work

Predicate Argument Structure Analysis. Early studies handled both intra- and inter-sentential anaphora (Taira et al., 2008; Sasano and Kurohashi, 2011), and Hangyo et al. (2013) present a method for handling exophora. Recent studies, however, focus only on intra-sentential anaphora (Ouchi et al., 2015; Shibata et al., 2016; Iida et al., 2016; Ouchi et al., 2017; Matsubayashi and Inui, 2017), because the analysis of inter-sentential anaphora is extremely difficult. Neural network-based approaches (Shibata et al., 2016; Iida et al., 2016; Ouchi et al., 2017; Matsubayashi and Inui, 2017) have improved its performance. Although most studies do not consider the notion of an entity, Sasano and Kurohashi (2011) consider entities, with salience scores calculated by simple rules. However, they used gold coreference links to form the entities, and reported that the salience score did not improve performance. In contrast, we perform CR automatically and capture entity salience using RNNs. For Chinese, where zero anaphors are often used, neural network-based approaches (Chen and Ng, 2016; Yin et al., 2017) outperformed conventional machine learning approaches (Zhao and Ng, 2007).

Coreference Resolution. CR has been actively studied in English and Chinese. Neural network-based approaches (Wiseman et al., 2016; Clark and Manning, 2016b,a; Lee et al., 2017) outperformed conventional machine learning approaches (Clark and Manning, 2015). Wiseman et al. (2016) and Clark and Manning (2016b) learn an entity representation and integrate it into a mention-based model. Our work is inspired by Wiseman et al. (2016), who learn the entity representation using Recurrent Neural Networks (RNNs). Clark and Manning (2016b) adopt a clustering approach for the entity representation. The reason we do not use this is that if we took a clustering approach in our setting, zero pronouns would need to be identified before clustering, and thus it would be hard to perform CR and PA jointly.
Lee et al. (2017) take an end-to-end approach, aiming not to rely on a hand-engineered mention detector (they consider all spans as potential mentions). In the Japanese evaluation corpora we use, since the basic unit for the annotations and for our analyses (CR and PA) is fixed, we do not need to consider all spans. In Japanese, CR has not been actively studied other than by Iida et al. (2003) and Sasano et al. (2007),
For example, “買う” (buy) takes NOM and ACC, but neither DAT nor NOM2. PA for “買う” in sentence (2) has to find John as NOM, but also has to judge that it does not take DAT and NOM2 arguments. Another difficulty lies in that a predicate takes a case, but in a sentence it does not take a specific argument. For example, in the sentence “it is difficult to bake a bread”, NOM of “bake” is not a specific person, but means “anyone” or “in general”. In such cases, PA has to regard arguments as unspecified. 4 Overview of Our Proposed Method An overview of our proposed model is described with a motivated example (Figure 1). Our model equips an entity buffer for entity management. At first, it contains only special entities, author and reader. In Japanese CR and PA, a basic phrase, which consists of one content word and zero or more function words, is adopted as a basic unit. When an input text is given, the contextual representations of basic phrases are obtained by using Convolutional Neural Network (CNN) and Bidirectional LSTM. Then, from the beginning of the text, CR is performed if a target phrase is a noun phrase, and PA is performed if a target phrase is a predicate phrase. Both of these analyses take 582 into consideration not only the mentions in the text but also the entities in the entity buffer. In CR, when a mention refers to an existing entity, the entity embedding in the entity buffer is updated. In Figure 1, “同氏” (said person) is analyzed to refer to “コワリョフ氏” (Mr.Kovalyov), and the entity embedding of “コワリョフ氏” is updated. When a mention is analyzed to have no antecedent, it is registered to the entity buffer as a new entity. In PA, when a predicate has no argument for any case, its argument is searched among any mentions in the text, author and reader. In the same way as CR, PA takes into consideration not only the mentions but also entities in the entity buffer, and updates the entity embedding. In Figure 1, the predicate “立候補し” (run for) has no NOM argument. Our method finds “コワ リョフ氏” as its NOM argument, and then updates its entity embedding. As mentioned before, the entity embedding of “コワリョフ氏” is updated by the coreference relation with “同氏” in the second sentence. In the third sentence, the predicate “支持していた” (support) has also no NOM argument, and “コワリョフ氏” is identified as its NOM argument, because the frequent reference implies its salience. 5 Base Model 5.1 Input Encoding Conventional machine learning techniques have extracted features from a basic phrase, which require much effort on feature engineering. Our method obtains an embedding of each basic phrase using CNN and bi-LSTM as shown in Figure 2. Suppose the i-th basic phrase bpi consists of |bpi| words. First, the embedding of each word is represented as a concatenation of word (lemma), part of speech (POS), sub-POS and conjugation embeddings. We append start-of-phrase and endof-phrase special words to each phrase in order to better represent prefixes and suffixes. Let W i ∈ Rd×(|bpi|+2) be an embedding matrix for bpi where d denotes the dimension of word representation. The embedding of the basic phrase is obtained by applying CNN to the sequence of words. A feature map f i is obtained by applying a convolution between W i and a filter H of width n. The m-th element of f i is obtained as follows: f i[m] = tanh(⟨W i[∗, m : m + n −1], H⟩), (1) !"#$%&'& !"#$ %&&'()*& +&,-& ./0& xi hi 12&3"'4&35& 16/.5& H W i f i[1] !"#$"! %&'! 
(& 17,85& 9$& :& ;& <$& =$& >& :?*,"!& ;?*,"!& >& >& Figure 2: Basic phrase embedding obtained with CNN and Bi-LSTM. where W i[∗, m : m+n−1] denotes the m-to-(m+ n −1)-th column of W i, and ⟨A, B⟩= Tr(ABT) is the Frobenius inner product. Then, to capture the most important feature for a given filter in bpi, the max pooling is applied as follows: xi = max m f i[m]. (2) The process described so far is for one filter. The multiple filters of varying widths are applied to obtain the representation of bpi. When we set h filters, xi, the embedding of the i-th basic phrase, is represented as [xi 1, · · · , xi h]. The embeddings of basic phrases are read by biLSTM to capture their context as follows: −→ h i = −−−−→ LSTM(xi, −→ h i−1), ←− h i = ←−−−− LSTM(xi, ←− h i+1), (3) and the contextualized embedding of the i-th basic phrase is represented as a concatenation of the hidden layers of forward and backward LSTM. hi = [−→ h i; ←− h i] (4) This process is performed for each sentence. Since CR and PA are performed for a whole document D, the indices of basic phrases are reassigned from the beginning to the end of D in a consecutive order: D = {h1, h2, · · · , hi, · · · }. To handle exophora, author and reader are assigned a unique trainable embedding, respectively. 583 5.2 Coreference Resolution We adopt a mention-ranking model that assigns each mention its highest scoring candidate antecedent. This model assigns a score sm CR(ant, mi) to a target mention mi and its candidate antecedent ant1. The candidate antecedents include i) mentions preceding mi, ii) author and reader, and iii) NACR (no antecedent). sm CR(ant, mi) is calculated as follows: sm CR(ant, mi) = W CR 2 ReLU(W CR 1 vCR input), (5) where W CR 1 and W CR 2 are weight matrices, and vCR input is an input vector, a concatenation of the following vectors: • embeddings of mi and ant • exact match or partial match between strings of mi and ant • sentence distance between mi and ant. The distance is binned into one of the buckets [0, 1, 2, 3+]. • whether a pair of mi and ant has an entry in a synonym dictionary. When a candidate antecedent is NACR, the input vector is just the embedding of a target mention mi, and the same neural network with different weight matrices calculates a score. The following margin objective is trained: LCR = Nm ∑ i max ant∈AN T (mi)(1+sm CR(ant, mi)−sm CR(ˆti, mi)), (6) where Nm denotes the number of mentions in a document, ANT (mi) denotes the set of candidate antecedents of mi, and ˆti denotes the highest scoring true antecedent of mi defined as follows: ˆti = argmax ant∈T (mi) sm CR(ant, mi), (7) where T (mi) denotes the set of true antecedents of mi. 5.3 Predicate Argument Structure Analysis When a target phrase is a predicate phrase, PA is performed. For each case of a predicate, PA searches an appropriate argument among candidate arguments: i) basic phrases located in the sentence including the predicate and preceding sentences, ii) author and reader, iii) unspecified, and 1The superscript m of sm CR(ant, mi) represents a mention-based score, which contrasts with an entity-based score introduced in Section 6. !"#$%&'(#!'")*+#,(!'(.#+/#$$%,)! 0#1#&23,'1-!"#4#"#,&#50#,(#,&#-$%0(',&#! "#$%!&'()*+! "#$! %! !! ,-! ...! &'()*!+! 6738'1938:!6;"<:! 6+#+/#":! 6"*,-43":!6',$:! sm PA(arg, mi, c) vPA input W PA 1,c W PA 2 Figure 3: A neural network for PA. iv) NAPA which means the predicate takes no argument of for the case. 
The probability that the predicate m_i takes an argument arg for case c is defined as follows:

P(c = arg | m_i) = exp(s^m_PA(arg, m_i, c)) / Σ_{arg′ ∈ ARG(m_i)} exp(s^m_PA(arg′, m_i, c)),   (8)

where ARG(m_i) denotes the set of candidate arguments of m_i, and the score s^m_PA(arg, m_i, c) is calculated by a neural network as follows (Figure 3):

s^m_PA(arg, m_i, c) = W^PA_2 tanh(W^PA_{1,c} v^PA_input),   (9)

where W^PA_{1,c} and W^PA_2 are weight matrices, and v^PA_input is an input vector, the concatenation of the following vectors:
• the embeddings of m_i and arg (NA_PA is assigned its own trainable embedding);
• path embedding: the dependency path between a predicate and an argument is an important clue. Roth and Lapata (2016) learn a representation of a lexicalized dependency path for SRL. An LSTM reads the words from the argument to the predicate along the dependency path (special words {Parent, Child} are added to indicate the dependency direction between basic phrases), and its final hidden state is adopted as the embedding of the dependency path; when an argument is inter-sentential or exophoric, the path embedding is set to a zero vector. For case analysis, the direct dependency relation between a predicate and its argument can be represented by the path embedding;
• selectional preference: selectional preference is another important clue for PA. A selectional preference score is learned in an unsupervised manner from automatic parses of a raw corpus (Shibata et al., 2016);
• the sentence distance between m_i and arg, binned in the same way as in CR.

The objective is to minimize the cross entropy between the predicted and true distributions:

L_PA = − Σ_i^{N_p} Σ_c log P(c = ârg | p_i),   (10)

where N_p denotes the number of predicates in a document, and ârg denotes the true argument.

6 Entity-Centric Model

While the base model performs mention-based CR and PA, our proposed model performs entity-based analyses, as shown in Figure 1.

6.1 Entity Embedding Update

The entity embeddings are managed in an entity buffer. First, let us introduce a time stamp i for the entity embedding update; time i corresponds to the analysis of the i-th basic phrase in a document. If an entity is referred to by an analysis decision, its embedding is updated. Let e^(k)_i be the embedding of entity k at time i (after the update). In CR, following Wiseman et al. (2016), when a target phrase m_i refers to entity k, e^(k)_i is updated as follows:

e^(k)_i ← LSTM_e(h_i, e^(k)_{i−1}),   (11)

where LSTM_e denotes an LSTM for the entity embedding update. When the antecedent is NA_CR, a new entity embedding is set up, initialized with a zero vector. The entity buffer maintains K LSTMs (K is the number of entities in a document), and their parameters are shared. The proposed method updates the entity embedding not only in CR but also in PA. When the referent of a zero pronoun of case c of predicate p_i is entity k, the entity embedding is updated using the predicate embedding h_i multiplied by a case-specific weight matrix W_c:

e^(k)_i ← LSTM_e(W_c h_i, e^(k)_{i−1}).   (12)

In both CR and PA, the embeddings of entities other than the referred entity k are not updated (e^(l)_i ← e^(l)_{i−1} for l ≠ k).

6.2 Use of Entity Embedding in CR and PA

Both CR and PA are allowed to take the entity embeddings into consideration. In CR, let z_ant denote the id of the entity to which the candidate antecedent ant belongs. The entity-based score s^e_CR is calculated as follows:

s^e_CR(ant, m_i) = h_i^T e^{(z_ant)}_{i−1}  (ant ≠ NA_CR);  g_NA(m_i)  (ant = NA_CR).
(13) The intuition behind the first case is that the dotproduct of hi, the embedding of the target mention, and e(zant) i−1 , the embedding of the entity that ant belongs to indicates the plausibility of their coreference. gNA(mi) is defined as follows: gNA(mi) = qT tanh(WNA [ hi ∑ k ei−1(k) ] ), (14) where q is a weight vector, and WNA is a weight matrix. The intuition is that whether a target phrase is NACR can be judged from hi, the embedding of the target mention itself, and the sum of all the current entity embeddings. se CR is added to sm CR, and the training objective is the same as the one described in Section 5.2. In PA, the entity embedding corresponding to a candidate argument arg5 is just added to the input vector vPA input described in Section 5.3, and mention- and entity-based score sm+e PA (arg, mi, c) is calculated in the same way as sm PA(arg, mi, c). The training objective is again the same as the one in Section 5.3. In Wiseman et al. (2016), the oracle entity assignment is used for the entity embedding update in training, and the system output is used in a greedy manner in testing. Since the performance of PA is lower than that of English CR, there might be a more significant gap between training and testing. Therefore, scheduled sampling (Bengio et al., 2015) is adopted to bridge the gap: in training, the oracle entity assignment is used with probability ϵt (at the t-th iteration) and the system output otherwise. Exponential decay is used: ϵt = kt (we set k = 0.75 for our experiments). 7 Experiments 7.1 Experimental Setting The two kinds of evaluation sets were used for our experiments. One is the KWDLC (Kyoto Uni5When arg is NAPA, the entity embedding is set to a zero vector. 585 versity Web Document Leads Corpus) evaluation set (Hangyo et al., 2012), and the other is Kyoto Corpus. KWDLC consists of the first three sentences of 5,000 Web documents (15,000 sentences) and Kyoto Corpus consists of 550 News documents (5,000 sentences). Word segmentations, POSs, dependencies, PASs, and coreferences were manually annotated (the closest referents and antecedents were annotated for zero anaphora and coreferences, respectively). Since we want to focus on the accuracy of CR and PA, gold segmentations, POSs, and dependencies were used. KWDLC (Web) was divided into 3,694 documents (11,558 sents.) for training, 512 documents (1,585 sents.) for development, and 700 documents (2,195 sents.) for testing; Kyoto Corpus (News) was divided into 360 documents (3,210 sents.) for training, 98 documents (971 sents.) for development, and 100 documents (967 sents.) for testing. The evaluation measure is an F-measure, and the evaluation of both CR and PA was relaxed using a gold coreference chain, which leads to an entity-based evaluation. We did not use the conventional CR evaluation measures (MUC, B3, CEAF and CoNLL) because our F-measure is almost the same as MUC, which is a link-based measure, and the other measures considering singletons get excessively high values6, and thus they do not accord with the actual performance in our setting.7 7.2 Implementation Detail The dimension of word embeddings was set to 100, and the word embeddings were initialized with pre-trained embeddings by Skip-gram with a negative sampling (Mikolov et al., 2013) on a Japanese Web corpus consisting of 100M sentences. The dimension of POS, sub-POS and conjugation were set to 10, respectively, and these embeddings were initialized randomly. 
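One training-time detail from Section 6.2 that is easy to make concrete is the scheduled-sampling schedule ε_t = k^t with k = 0.75. The sketch below shows the coin flip that decides whether the oracle entity assignment or the system output is fed at iteration t; it is illustrative only, not the actual training code, and the function name is an assumption.

import random

def use_oracle(t: int, k: float = 0.75) -> bool:
    # eps_t = k ** t: probability of feeding the oracle entity assignment
    # at training iteration t; otherwise the system output is used.
    return random.random() < k ** t

# The oracle probability decays exponentially: 0.75, 0.56, 0.42, ... over iterations.
print([round(0.75 ** t, 2) for t in range(1, 6)])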
The dimension of the hidden layers in all the neural networks was set to 100. We used filter windows of widths 1, 2 and 3 with 33 feature maps each for the basic phrase CNN.

6 In Japanese, since zero pronouns are often used, there are many singletons. In the example sentences (1) of the Introduction, while “a car” and “It” form one cluster in the English sentences (1-a), “a car” is a singleton in the Japanese sentences (1-b) because a zero pronoun is used in the second sentence.
7 For the Web evaluation set, the F-measure of our proposed method is 0.685, and the conventional evaluation measures are as follows; MUC: 69.1, B3: 97.2, CEAF: 95.7, and CoNLL: 87.3.

Adam (Kingma and Ba, 2014) was adopted as the optimizer. F-measures were averaged over four runs. Checkpoint ensembling (Chen et al., 2017) was adopted, in which the k best models in terms of validation score are taken and their parameters are averaged for testing. This method requires only one training process. In our experiments, k was set to 5, and the maximum number of epochs was set to 10. We used a single-layer bi-LSTM for the input encoding (Section 5.1); preliminary experiments with stacked bi-directional LSTMs with residual connections were not favorable. Although we also tried character-level embeddings of each word obtained with a CNN, in the same way as the basic phrase embedding is obtained from the word sequence, the performance did not improve. The synonym dictionary used for CR (Section 5.2) was constructed from an ordinary dictionary and a Web corpus, and has about 7,300 entries (Sasano et al., 2007).

7.3 Experimental Result

The following three methods were compared:
• Baseline: the method described in Section 5.
• “+entity (CR)”: this method corresponds to (Wiseman et al., 2016). The entity embedding is updated based on the CR result, and CR takes the entity embedding into consideration.
• “+entity (CR,PA)” (proposed method): the entity embedding is updated based on the PA result as well as the CR result, and both CR and PA take the entity embedding into consideration.

The performance of CR and PA (case analysis and zero anaphora resolution (ZAR)) is shown in Table 1. The performance of CR and case analysis was almost the same for all the methods. For ZAR, “+entity (CR,PA)” improved the performance drastically. CR surely benefits from entity salience; however, since the entity embeddings are updated based on system outputs, the quality of those outputs matters. The performance of Japanese CR is lower than that of English CR; therefore, we assume that there are both improved and worsened examples, and our CR performance did not improve significantly. The performance of ZAR also matters. However, since ZAR performance in our baseline model is extremely low, there are few worsened examples and

Method           | Web: CR / case analysis / ZAR | News: CR / case analysis / ZAR
Baseline         | 0.661 / 0.887 / 0.516         | 0.543 / 0.896 / 0.278
+entity (CR)     | 0.666 / 0.890 / 0.518         | 0.539 / 0.894 / 0.275
+entity (CR,PA)  | 0.685 / 0.892 / 0.581         | 0.541 / 0.895 / 0.356

Table 1: Performance (F-measure) of coreference resolution, case analysis and zero anaphora resolution.
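The checkpoint-ensemble step described in Section 7.2 (keep the k best checkpoints by validation score and average their parameters for testing) can be sketched as follows; checkpoints are shown here as plain dictionaries of parameter lists rather than real framework state dicts, and the function name is illustrative.

from typing import Dict, List, Tuple

def average_best_checkpoints(
        scored: List[Tuple[float, Dict[str, List[float]]]], k: int = 5
) -> Dict[str, List[float]]:
    # Take the k checkpoints with the highest validation score ...
    best = [ckpt for _, ckpt in sorted(scored, key=lambda s: -s[0])[:k]]
    # ... and average each named parameter element-wise across them.
    avg = {}
    for name in best[0]:
        cols = zip(*(ckpt[name] for ckpt in best))
        avg[name] = [sum(vals) / len(best) for vals in cols]
    return avg

ckpts = [(0.61, {"w": [1.0, 2.0]}), (0.63, {"w": [3.0, 4.0]}), (0.58, {"w": [5.0, 6.0]})]
print(average_best_checkpoints(ckpts, k=2))   # {'w': [2.0, 3.0]}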
Web News case method case analysis zero anaphora resolution (ZAR) case analysis zero anaphora resolution (ZAR) all intra inter exophora all intra inter exophora NOM Baseline 0.942 0.575 0.466 0.083 0.695 0.939 0.316 0.455 0.042 0.261 +entity (CR) 0.945 0.579 0.475 0.117 0.693 0.940 0.315 0.452 0.037 0.239 +entity (CR,PA) 0.945 0.646 0.508 0.502 0.721 0.940 0.390 0.486 0.256 0.357 # of arguments (1,461) (2,009) (338) (393) (1,278) (905) (1,016) (451) (388) (177) ACC Baseline 0.853 0.268 0.368 0.119 0.000 0.679 0.053 0.093 0.000 0.000 +entity (CR) 0.855 0.254 0.357 0.108 0.000 0.631 0.025 0.048 0.000 0.000 +entity (CR,PA) 0.857 0.343 0.413 0.282 0.000 0.651 0.016 0.028 0.000 0.000 # of arguments (299) (224) (106) (105) (13) (105) (97) (41) (56) (0) DAT Baseline 0.498 0.432 0.115 0.016 0.581 0.308 0.183 0.039 0.000 0.367 +entity (CR) 0.445 0.422 0.119 0.016 0.574 0.223 0.162 0.005 0.000 0.334 +entity (CR,PA) 0.411 0.465 0.133 0.126 0.600 0.292 0.328 0.030 0.005 0.566 # of arguments (101) (576) (86) (149) (341) (26) (286) (82) (89) (115) NOM2 Baseline 0.478 0.216 0.259 0.000 0.245 0.098 0.000 0.000 0.000 0.000 +entity (CR) 0.501 0.212 0.226 0.000 0.257 0.069 0.000 0.000 0.000 0.000 +entity (CR,PA) 0.526 0.283 0.240 0.112 0.341 0.092 0.000 0.000 0.000 0.000 # of arguments (110) (140) (29) (28) (83) (13) (37) (17) (13) (7) all Baseline 0.887 0.516 0.400 0.074 0.654 0.896 0.278 0.394 0.032 0.291 +entity (CR) 0.890 0.518 0.405 0.093 0.654 0.894 0.275 0.396 0.027 0.265 +entity (CR,PA) 0.892 0.581 0.439 0.399 0.681 0.895 0.356 0.417 0.204 0.432 # of arguments (1,971) (2,949) (559) (675) (1,715) (1,049) (1,436) (591) (546) (299) Table 2: Performance of case analysis and zero anaphora resolution for each case, and each argument position for zero anaphora resolution. The underlined values indicate the proposed method outperforms the baseline by a large margin. a number of improved examples. Therefore, ZAR can benefit from the entity representation obtained by both CR and PA. Table 2 shows performance of case analysis and zero anaphora resolution for each case, and each argument position. Unspecified was counted for exophora. Both for the News and Web evaluation sets, the performance for inter arguments of zero anaphora resolution, which was extremely difficult in the baseline method, was improved by a large margin by our proposed method. 7.4 Ablation Study To reveal the importance of each clue for CR and PA, each clue was ablated. Table 3 shows the result on the development set. We found that, the path embedding was effective for PA, and the string match was effective for CR. The sentence distance for both CR and PA was effective for News, but not for Web since the Web evaluation corpus consists of three-sentence documents. 7.5 Comparison with Other Work It is difficult to compare the performance of our method with other studies directly because there are no studies handling both CR and PA. The comparisons with other studies are summarized as follows: • Shibata et al. (2016) proposed a neuralnetwork based PA. Their target was intra and exophora for three major cases (NOM, ACC and DAT), and the performance was 0.534 on the same Web corpus as ours. The performance of our proposed method for the same three cases was 0.626. Furthermore, since their model assumes a static PA graph, their model is difficult to be extended to handle CR. • Ouchi et al. (2017) proposed a grid-type RNN model for capturing the multi-predicate interaction. 
Their target was only intra on the NAIST text corpus (News), and the performance was 47.1. Since the NAIST text 587 coreference resolution zero anaphora resolution (ZAR) Web News Web News F1 ∆ F1 ∆ F1 ∆ F1 ∆ Our proposed model 0.633 0.613 0.512 0.361 CR - string match 0.212 -0.420 0.184 -0.429 0.474 -0.038 0.348 -0.013 - sentence distance 0.643 +0.011 0.588 -0.025 0.505 -0.007 0.343 -0.018 - synonym dictionary 0.643 +0.010 0.613 0.000 0.510 -0.002 0.348 -0.013 PA - path embedding 0.643 +0.010 0.625 +0.012 0.459 -0.054 0.268 -0.093 - selectional preference 0.638 +0.005 0.316 -0.297 0.507 -0.005 0.173 -0.188 - sentence distance 0.647 +0.014 0.606 -0.007 0.516 +0.004 0.327 -0.034 Table 3: Ablation study on the development set. The cells shaded gray represent they are not directly affected from the ablation, but from the counterpart analysis result. corpus contains a lot of annotation errors as pointed out in Iida et al. (2016), we did not conduct our experiments on the NAIST text corpus. • Iida et al. (2003) reported an F-measure of about 0.7 on News domain. The possible reason why our performance on News (0.541) is lower than theirs is that their basic unit is a compound noun while our basic unit is a noun, and thus our setting is difficult in comparison with theirs. Since we handle inter as well as intra and exophora arguments in PA, together with CR, we can say that our experimental setting is more practical in comparison with other studies. 7.6 Error Analysis In example (3), although the NOM argument of the predicate “通院ですよ!” (go to hospital) is author, our method wrongly classified it as unspecified. (3) 毎日のように every day 通院ですよ! go to hospital! 私自身は I myself-TOP とても very 健康なんですけど。 healthy. ((I) go to hospital every day! (I am) very healthy, though.) In the second sentence, our method correctly identified the antecedent of “私” (I) as author, and the NOM of “健康なんですけど” (healthy) as “私” (I). Our method adopts the greedy search so that it cannot exploit this handy information in the analysis of the first sentence. The global modeling using reinforcement learning (Clark and Manning, 2016a) for a whole document is our future work. In example (4), although the NOM argument of “装飾されています” (be decorated) in the second sentence is “ドレス” (dress) in the first sentence, our method wrongly classified it as NAPA. (4) 大変 very 印象的な impressive ドレスです。 dress-COPULA. オーガンジーの organdie-GEN 上に top-DAT ラインを line-ACC 描くように draw-as 小さな small ビーズで bead-INS 装飾されています。 decorated ((This is) a very impressive dress. (The dress) is decorated by small beads as they draw a line on its organdy.) “オーガンジー” (organdie) has a bridging relation to “ドレス”, which might help capture the salience of “ドレス”. The bridging reference resolution is our next target and must be easily incorporated into our model. 8 Conclusion This paper has presented an entity-centric neural network-based joint model of coreference resolution and predicate argument structure analysis. Each entity has its embedding, and the embeddings are updated according to the result of both of these analyses dynamically. Both of these analyses took the entity embedding into consideration to access the global information of entities. The experimental results demonstrated that the proposed method could improve the performance of the inter-sentential zero anaphora resolution drastically, which has been regarded as a notoriously difficult task. We believe that our proposed method is also effective for other pro-drop languages such as Chinese and Korean. 
Acknowledgment This work was supported by JST CREST Grant Number JPMJCR1301, Japan. 588 References Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. CoRR abs/1506.03099. http://arxiv.org/ abs/1506.03099. Chen Chen and Vincent Ng. 2016. Chinese zero pronoun resolution with deep neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 778–788. http://www. aclweb.org/anthology/P16-1074. Hugh Chen, Scott Lundberg, and Su-In Lee. 2017. Checkpoint ensembles: Ensemble methods from a single training process. CoRR abs/1710.03282. http://arxiv.org/abs/1710.03282. Kevin Clark and Christopher D. Manning. 2015. Entity-centric coreference resolution with model stacking. In Association for Computational Linguistics (ACL). Kevin Clark and Christopher D. Manning. 2016a. Deep reinforcement learning for mention-ranking coreference models. In Empirical Methods on Natural Language Processing (EMNLP). Kevin Clark and Christopher D. Manning. 2016b. Improving coreference resolution by learning entitylevel distributed representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 643–653. https://doi.org/10. 18653/v1/P16-1061. Masatsugu Hangyo, Daisuke Kawahara, and Sadao Kurohashi. 2012. Building a diverse document leads corpus annotated with semantic relations. In Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation. Faculty of Computer Science, Universitas Indonesia, Bali,Indonesia, pages 535–544. http://www. aclweb.org/anthology/Y12-1058. Masatsugu Hangyo, Daisuke Kawahara, and Sadao Kurohashi. 2013. Japanese zero reference resolution considering exophora and author/reader mentions. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 924–934. http://www. aclweb.org/anthology/D13-1095. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what’s next. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2016. Tracking the world state with recurrent entity networks. CoRR abs/1612.03969. http://arxiv.org/ abs/1612.03969. Ryu Iida, Kentaro Inui, Hiroya Takamura, and Yuji Matsumoto. 2003. Incorporating contextual cues in trainable models for coreference resolution. In In Proceedings of the EACL Workshop on The Computational Treatment of Anaphora. pages 23–30. Ryu Iida, Kentaro Torisawa, Jong-Hoon Oh, Canasai Kruengkrai, and Julien Kloetzer. 2016. Intrasentential subject zero anaphora resolution using multi-column convolutional neural network. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1244–1254. https://aclweb. org/anthology/D16-1132. Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A. Smith. 2017. Dynamic entity representations in neural language models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1831–1840. 
http://www.aclweb.org/ anthology/D17-1195. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. http://arxiv.org/abs/ 1412.6980. Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. 2016. Dynamic entity representation with max-pooling improves machine reading. In Proceedings of the NAACL HLT 2016. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 188–197. http://www.aclweb.org/ anthology/D17-1018. Yuichiroh Matsubayashi and Kentaro Inui. 2017. Revisiting the design issues of local models for japanese predicate-argument structure analysis. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Asian Federation of Natural Language Processing, Taipei, Taiwan, pages 128–133. http://www.aclweb.org/ anthology/I17-2022. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, Curran Associates, Inc., pages 3111–3119. 589 Hiroki Ouchi, Hiroyuki Shindo, Kevin Duh, and Yuji Matsumoto. 2015. Joint case argument identification for Japanese predicate argument structure analysis. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 961–970. http://www.aclweb. org/anthology/P15-1093. Hiroki Ouchi, Hiroyuki Shindo, and Yuji Matsumoto. 2017. Neural modeling of multi-predicate interactions for Japanese predicate argument structure analysis. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 1591– 1600. http://aclweb.org/anthology/ P17-1146. Michael Roth and Mirella Lapata. 2016. Neural semantic role labeling with dependency path embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1192–1202. http://www.aclweb.org/ anthology/P16-1113. Ryohei Sasano, Daisuke Kawahara, and Sadao Kurohashi. 2007. Improving coreference resolution using bridging reference resolution and automatically acquired synonyms. In DAARC. Ryohei Sasano and Sadao Kurohashi. 2011. A discriminative approach to Japanese zero anaphora resolution with large-scale lexicalized case frames. In Proceedings of 5th International Joint Conference on Natural Language Processing. Asian Federation of Natural Language Processing, Chiang Mai, Thailand, pages 758–766. http://www.aclweb. org/anthology/I11-1085. Tomohide Shibata, Daisuke Kawahara, and Sadao Kurohashi. 2016. Neural network-based model for Japanese predicate argument structure analysis. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1235–1244. http://www.aclweb.org/ anthology/P16-1117. Hirotoshi Taira, Sanae Fujita, and Masaaki Nagata. 2008. 
A Japanese predicate argument structure analysis using decision lists. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Honolulu, Hawaii, pages 523–532. http://www.aclweb.org/ anthology/D08-1055. Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning global features for coreference resolution. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 994–1004. http://www.aclweb. org/anthology/N16-1114. Qingyu Yin, Yu Zhang, Weinan Zhang, and Ting Liu. 2017. Chinese zero pronoun resolution with deep memory network. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 1320–1329. https://www.aclweb. org/anthology/D17-1136. Shanheng Zhao and Hwee Tou Ng. 2007. Identification and resolution of Chinese zero pronouns: A machine learning approach. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Association for Computational Linguistics, Prague, Czech Republic, pages 541–550. http://www.aclweb. org/anthology/D/D07/D07-1057. Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 1127–1137. http://www. aclweb.org/anthology/P15-1109.
2018
54
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 590–600 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 590 Constraining MGbank: Agreement, L-Selection and Supertagging in Minimalist Grammars John Torr School of Informatics University of Edinburgh 11 Crichton Street, Edinburgh, UK [email protected] Abstract This paper reports on two strategies that have been implemented for improving the efficiency and precision of wide-coverage Minimalist Grammar (MG) parsing. The first extends the formalism presented in Torr and Stabler (2016) with a mechanism for enforcing fine-grained selectional restrictions and agreements. The second is a method for factoring computationally costly null heads out from bottom-up MG parsing; this has the additional benefit of rendering the formalism fully compatible for the first time with highly efficient Markovian supertaggers. These techniques aided in the task of generating MGbank, the first wide-coverage corpus of Minimalist Grammar derivation trees. 1 Introduction Parsers based on deep grammatical formalisms, such as CCG (Steedman and Baldridge, 2011) and HPSG (Pollard and Sag, 1994), exhibit superior performance on certain semantically crucial (unbounded) dependency types when compared to those with relatively shallow context free grammars (in the spirit of Collins (1997) and Charniak (2000)) or, in the case of modern dependency parsers (McDonald and Pereira (2006), Nivre et al. (2006)), no explicit formal grammar at all (Rimell et al. (2009), Nivre et al. (2010)). As parsing technology advances, the importance of correctly analysing these more complex construction types will also inevitably increase, making research into deep parsing technology an important goal within NLP. One deep grammatical framework that has not so far been applied to NLP tasks is the Minimalist Grammar (MG) formalism (Stabler, 1997). Linguistically, MG is a computationally-oriented formalization of many aspects of Chomsky’s (1995) Minimalist Program, arguably still the dominant framework in theoretical syntax, but so far conspicuously absent from NLP conferences. Part of the reason for this has been that until now no Minimalist treebank existed on which to train efficient statistical Minimalist parsers. The Autobank (Torr, 2017) system was designed to address this issue. It provides a GUI for creating a wide-coverage MG together with a module for automatically generating MG trees for the sentences of the Wall Street Journal section of the Penn Treebank (PTB) (Marcus et al., 1993), which it does using an exhaustive bottom-up MG chart parser1. This system has been used to create MGbank, the first wide coverage (precisionoriented) Minimalist Grammar and MG treebank of English, which consists of 1078 hand-crafted MG lexical categories (355 of which are phonetically null) and currently covers approximately half of the WSJ PTB sentences. A problem which arose during its construction was that without any statistical model to constrain the derivation, MG parsing had to be exhaustive, and this presented some significant efficiency challenges once the grammar grew beyond a certain size2, mainly because of the problem of identifying the location and category of phonetically silent heads (equivalent to type-changing unary rules) allowed by the theory. 
This problem was particularly acute for the MGbank grammar, which makes extensive use of such heads to multiply out the lexicon during parsing. This approach reduces the amount of time needed for manual annotation, and also enables the parser to better generalise to unseen constructions, but it can quickly lead to an explosion in the search space if left unconstrained. This paper provides details on two strategies that were developed for constraining the hypothesis space for wide-coverage MG parsing. The first of these is an implementation of the sorts of selectional restrictions3 standardly used by other formalisms, which allow a head to specify certain fine-grained properties about its arguments. Pesetsky (1991) refers to this type of fine-grained selection as l(exical)-selection, in contrast to coarser-grained c(ategory)-selection and semantic s-selection. The same system is also used here to enforce morphosyntactic agreements, such as subject-verb agreement4 and case ‘assignment’. It is simpler and flatter than the structured feature value matrices one finds in formalisms such as HPSG and LFG, which arguably makes it less linguistically plausible. However, it is also considerably easier to read and to annotate, which greatly facilitated the manual treebanking task. The second technique to be presented is a method for extracting a set of complex overt categories from a corpus of MG derivation trees which has the dual effect of factoring computationally costly null heads out from parsing (but not from the resulting parse trees) and rendering MGs fully compatible for the first time with existing supertagging techniques. Supertagging was originally introduced in Bangalore and Joshi (1999) for the Lexicalised Tree Adjoining Grammar (LTAG) formalism (Schabes et al., 1988), and involves applying Markovian part-of-speech tagging techniques to strongly lexicalised tag sets that are much larger and richer than the 45 tags used by the PTB. Because each supertag contains a great deal of information about the syntactic environment of the word it labels, such as its subcategorization frame, supertagging is sometimes referred to as ‘almost parsing’. It has proven highly effective at making CCG (Clark and Curran, 2007; Lewis et al., 2016; Xu, 2016; Wu et al., 2017) parsing in particular efficient enough to support large-scale NLP tasks, making it desirable to apply this technique to MGs. However, existing supertaggers can only tag what they can see, presenting a problem for MGs, which include phonetically unpronounced heads. Our extraction algorithm addresses this by anchoring null heads to overt ones within complex LTAG-like supertag categories.

1 The parser is based on Harkema’s (2001) CKY variant.
2 As Cramer and Zhang (2010) (who pursue a similar treebanking strategy for HPSG) observe, there is very often considerable tension between the competing goals of efficiency and coverage for deep, hand-written and precision-oriented parsers, which aim not only to provide detailed linguistic analyses for grammatical sentences, but also to reject ungrammatical ones wherever possible.
3 These were briefly introduced in Torr (2017), but are expounded here in much greater depth.
4 The approach to agreement adopted here differs in various respects from the operation Agree (Chomsky (2000), (2001)) assumed in current mainstream Minimalism.
The paper is arranged as follows: section 2 gives an informal overview of MGs; section 3 introduces the selectional mechanisms and shows how these are used in MGbank to enforce case ‘assignment’ (3.1), l-selection (3.2) and subjectverb agreement (3.3); section 4 presents the algorithm for extracting supertags from a corpus of MG derivation trees (4.1), gives details of how a standard CKY MG parser can straightforwardly be adapted to make use of these complex tags (4.2), and presents some preliminary supertagging results (4.3) and a discussion of these (4.4); section 5 concludes the paper. 2 Minimalist Grammars For a more detailed and formal account of the MG formalism assumed in this paper, see Torr and Stabler (2016) (henceforth T&S); here we give only an informal overview. MG is introduced in Stabler (1997); it is a strongly lexicalised formalism in which categories are comprised of lists of structure building features ordered from left to right. These features must be checked against each other and deleted during the derivation, except for a single c feature on the complementizer (C) heading the sentence, which survives intact (equivalent to reaching the S root in classical CFG parsing). Features are checked and deleted via the application of a small set of abstract Merge and Move rules. Two simple MG lexical entries are given below (The :: is a type identifier5): him :: d helps :: d= v The structure building features themselves can be categorized into four classes: selector =x/x= features, selectee x features, licensor +y features, and licensee -y features. In a directional MG, such as that presented in T&S, the = symbol on the selector can appear on either side of the x category symbol, and this indicates whether selection is to the left or to the right. For instance, in our toy lexicon helps’s first feature is a d= selector, indicating that it is looking for a DP on its right. Since the 5:: indicates a non-derived item and : a derived one. 592 first feature of him is a d selectee, we can merge these two words to obtain the following VP category, where ✏is the empty string (The reason for the commas separating the left and right dependent string components from the head string component is to allow for subsequent head movement of the latter (see Stabler (2001)): ✏, helps, him : v The strings of the two merged elements have been here been concatenated, but this will not always be the case. In particular, if the selected item has additional features behind its selectee, then it will need to check these in subsequent derivational steps via applications of Move. In that case the two constituents must be kept separate within a single expression following Merge. To illustrate this, we will update the lexicon as follows: him :: d -case helps :: d= +CASE v Merging these two items results in the following expression: ✏, helps, ✏: +CASE v, him : -case The two subconstituents, separated above by the rightmost comma, are referred to as chains; the leftmost chain in any expression is the head of the expression; all other chains are movers. The +CASE licensor on the head chain must now attract a chain within the expression with a matching -case licensee as its first feature to move overtly to its left dependent (specifier) position6. 
Exactly one moving chain must satisfy this condition, or this expression will be unable to enter into any further operations (if more than one chain has the same licensee feature, it will violate a constraint on MG derivations known as the Shortest Move Constraint (SMC) and automatically be discarded). As this condition is satisfied by just him’s 6Uppercase licensors specify overt movement; lowercase licensors, by contrast, trigger covert movement, where only the features move, not the string (see T&S). Note that the MGbank grammar follows Chomsky’s (2008) suggestion that it is the lexical verb V, rather than the null ‘little v’ head governing it, which checks the object’s features, having inherited the relevant licensors (offline we assume) from v. This unifies the analysis of standard transitives with ECM constructions (Jack expected Mary to help), which in MGbank involve overt raising of the subject of the embedded infinitival clause to spec-VP to check accusative case (object control Jack persuaded Mary to help involves two such movements, the first for theta and the second for case). -case feature, we can perform the unary operation Move on this expression, resulting in the following new, single-chained expression: him, helps, ✏: v We can represent these binary Merge and unary Move operations using the MG derivation tree in fig 1a. Derivation trees such as this are used frequently in work on Stablerian Minimalist Grammars, but they can be deterministically mapped into phrase structure trees like fig 1b7. him, helps, ✏: v ✏, helps, ✏: +CASE v, him : -case ✏, him, ✏: d -case ✏, helps, ✏:: d= +CASE v (a) VP V’ DPi t V helps DPi him (b) Figure 1: An MG Derivation tree for the VP him, helps (a); and its corresponding Xbar phrase structure tree (b). At this stage in the derivation the verb and its object are incorrectly ordered. This will be rectified by subsequent V-to-v head movement placing the verb to the left of its object. To continue this derivation and derive the transitive sentence he helps him, we will expand our lexicon with the following categories, where square brackets indicate a null head and a > diacritic on a selector feature indicates that a variant of Merge is triggered in which the head string of the selected constituent undergoes head movement to the left of the selecting constituent’s head string: he :: d -case [trans] :: >v= =d lv8 [pres] :: lv= +CASE t [decl] :: t= c The full derivation tree and corresponding Xbar phrase structure tree for the sentence are given in fig 2 and fig 3 respectively. 3 Case, L-selection and Agreement 3.1 Case ‘Assignment’ Notice that at present both the nominative and accusative forms of the masculine personal pronoun 7MGbank includes MG derivation tree, MG derived (bare phrase structure) tree, and Xbar tree formats. 8Note that little v is written as lv in MGbank derivation trees because upper vs lowercase letters are used to trigger different rules. In the corresponding MGbank Xbar trees, however, v has been converted to V and lv to v, to make these trees more familiar. 593 ✏, [decl], he [pres] helps [trans] him : c he, [pres], helps [trans] him : t ✏, [pres], helps [trans] him : +CASE t, he : -case ✏, helps [trans], him : lv, he : -case ✏, helps [trans], him : =d lv him, helps, ✏: v ✏, helps, ✏: +CASE v, him : -case ✏, him ✏, :: d -case ✏, helps, ✏:: d= +CASE v ✏, [trans], ✏:: >v= =d lv ✏, he, ✏:: d -case ✏, [pres], ✏:: lv= +CASE t ✏, [decl], ✏:: t= c Figure 2: MG derivation tree for the sentence he helps him. 
CP TP T’ vP v’ VP V’ DPj t Vk t DPj him v v [trans] Vk helps DPi t T [pres] DPi he C [decl] Figure 3: Xbar phrase structure tree for the sentence he helps him. in our lexicon have the same feature sequence. This means that as well as correctly generating he helps him, our grammar also overgenerates him helps he. One way to solve this would be to split +/-case features into +/-nom and +/-acc. However, many items of category d in English (e.g. the, a, you, there, it) are syncretised (i.e. have the same phonetic form) for nominative vs. accusative case. This solution therefore lacks elegance as it expands the lexicon with duplicate homophonic entries differing in just a single (semantically meaningless) feature. Furthermore, increasing the size of the set k of licensees could adversely impact parsing efficiency, given that the worst case theoretical time complexity of MG chart parsing is known to be n2k+3 (Fowlie and Koller, 2017), where k is the number of moving chains allowed in any single expression by the grammar. Instead, we will retain the single -case licensee feature and introduce NOM and ACC as subcategories, or selectional properties, of this feature. We will also subcategorize licensor features using selectional requirements of the form +X and -X, where X is some selectional property. Positive +X features require the presence of the specified property on the licensee feature being checked, while -X features require its absence. For example, consider the following updated lexical entries, where individual selectional features are separated by the . symbol: him :: d -case{ACC} he :: d -case{NOM} helps :: d= +CASE{+ACC} v{PRES.TRANS} [pres] :: lv{+PRES}= +CASE{+NOM} t{FIN.PRES} [trans] :: >v{+TRANS}= =d lv The +ACC selectional requirement on the V head’s +CASE licensor specifies that the object’s licensee feature must bear an ACC selectional property, while +NOM on the T(ense) head indicates that the subject’s licensee must have a NOM property. For SMC purposes, however, these two different subcategories of -case will still block one another, meaning that k remains unaffected. The reader should satisfy themselves that our grammar now correctly blocks the ungrammatical him helps he. We can now also address the aforementioned syncretism issue without increasing the size of the grammar. To do this, we simply allow features to bear multiple selectional properties from the same paradigm. For example, representing the pronoun it as follows will allow it to appear in either a nominative or an accusative case licensing position: it :: d -case{ACC.NOM} 594 3.2 L-selection As well as constraining Move, selectional restrictions can also constrain Merge. For instance, we can ensure that a subject control verb like want subcategorizes for a to-infinitival CP complement, and thereby avoid overgenerating Jack wants that she help(s), simply by using the following categories for want and that: want :: c{+INF}= v{TRANS} that :: t{+FIN}= c{DECL.FIN} Because that lacks the INF feature required by want, the ungrammatical derivation is blocked. We also need to block *Jack wants she help(s), where the overt C head is omitted. Minimalists assume that finite embedded declaratives lacking an overt C are nevertheless headed by a null C a silent counterpart of that. A complicating factor is that a null complementizer is also assumed to head certain types of embedded infinitivals, including the embedded help clause in Jack wants [CP to help]. 
Given that these null C heads are (trivially) homophones and that they arguably exist to encode the same illocutionary force9, an elegant approach would be to minimize the size of the lexicon - and hence the grammar - by treating them as one and the same item. On the other hand, using a single null C head syncretised with both FIN and INF will fail to block *Jack wants she help(s). At present both C and T are specified as FIN, suggesting a redundancy. Instead, therefore, we will assume that T, being the locus of tense, is also the sole locus of inherent finiteness, but that C’s selectee may inherit FIN or INF from its TP complement as the derivation proceeds10. Only a null C which inherits INF from a to-TP complement will be selectable by a verb like want, blocking the 9Infinitival complementizers are sometimes assumed to encode irrealis force (see e.g. Radford (2004)) in contrast to that and its null counterpart which encode declarative force. However, the fact that Jack expects her to help is (on one reading) virtually synonymous with Jack expects that she will help suggests that in both cases the C head is encoding the same semantic property, with any subtle difference in meaning attributable to the contents of the Tense (T) head (i.e. to vs. will). Consider also Mary wondered whether to help vs. Mary wondered whether she should help, where the embedded infinitival and finite clauses are both clearly interrogative. 10If Grimshaw (1991) is correct that functional projections like DP, TP and CP are part of extended projections of the N and V heads they most closely c-command, then we should not be surprised to find instances where fine-grained syntactic properties are projected up through these functional layers. ungrammatical *Jack wants she help(s). However, although lacking inherent tense properties, certain C heads continue to bear inherent tense requirements11; for instance, that’s selector will retain its inherent +FIN, identifying it as a finite complementizer. To implement this percolation12 mechanism, we now introduce selectional variables, which we write as x, y, z etc. A variable on a selector or licensor feature will cause all the selectional properties and requirements (but not other variables) contained on the selectee or licensee feature that it checks to be copied onto all other instances of that variable on the selecting or licensing category’s remaining unchecked feature sequence. Consider the following: [trans] :: >v{+TRANS.x}= =d lv{x} [pres] :: lv{+PRES.x}= +CASE{+NOM.x} t{FIN.x} to :: lv{+BARE.x}= t{INF.x} [decl] :: t{x}= c{DECL.x} that :: t{+FIN.x}= c{DECL.x} The [pres] T head has an x variable on its lv= selector feature and this same variable also appears to the right on its +CASE licensor and t selectee; any selectional properties or requirements contained on the lv selectee of its vP complement will thus percolate onto these two features (see fig 4). The x’s on the two C heads will percolate the FIN property from the t selectee of [pres] to the c selectee of [decl], where it can be selected for by a verb like say, but not want, which requires INF (contained on the to T head); this will correctly block *Jack wants (that) she help(s). Although we will not discuss the details here, it is worth noting that the MGbank grammar also uses this same percolation mechanism to capture long distance subcategorization in English subjunctives, thereby allowing Jack demanded that she be there on time while also blocking *Jack demanded that she is there on time. 
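As a rough illustration of the two mechanisms just described, and not of MGbank's actual implementation, the following sketch checks +X/-X selectional requirements of a selector or licensor against the selectional properties of the feature it checks, and percolates those properties (and requirements) onto any later features that carry the same variable, such as x. The data representation is an assumption made for the sketch.

def requirements_met(requirements, properties):
    # +X needs X to be present on the checked feature; -X needs it absent.
    for req in requirements:
        flag, prop = req[0], req[1:]
        if flag == "+" and prop not in properties:
            return False
        if flag == "-" and prop in properties:
            return False
    return True

def percolate(remaining_features, variable, properties):
    # Copy the checked feature's properties and requirements onto every later
    # feature that mentions the same variable (the paper's x, y, z, ...).
    return [
        (name, (props - {variable}) | properties if variable in props else props)
        for name, props in remaining_features
    ]

# [pres] after selecting an lv complement whose selectee carries {PRES, TRANS, +3SG}:
t_head = [("+CASE", {"+NOM", "x"}), ("t", {"FIN", "x"})]
complement_props = {"PRES", "TRANS", "+3SG"}
print(requirements_met({"+PRES"}, complement_props))   # True: the lv= selector's +PRES is satisfied
print(percolate(t_head, "x", complement_props))        # +CASE and t now also carry PRES, TRANS, +3SG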
11The property vs. requirement distinction mirrors Chomsky’s (1995) interpretable vs. uninterpretable one. 12Note that because we are only allowing selectional properties and requirements to percolate, rather than the structure building feature themselves, this system is fundamentally different from that described in Kobele (2005), where it was shown that allowing licensee features to be percolated leads to type 0 MGs. Furthermore, by unifying any multiple instances of the same selectional property or requirement that arise on a structure building feature owing to percolation, we can ensure that the set of MG terminals and non-terminals remains finite and thus that the weak equivalence to MCFG (Michaelis, 1998; Harkema, 2001) is maintained. 595 ✏, [pres], helps [trans] him : +CASE{+NOM.PRES.TRANS.+3SG} t{FIN.PRES.TRANS.+3SG}, he : -case{NOM.3SG} ✏, helps [trans], him : lv{PRES.TRANS.+3SG}, he : -case{NOM.3SG} ✏, [pres], ✏:: lv{+PRES.x}= +CASE{+NOM.x} t{FIN.x} Figure 4: Merge of T with vP with percolation of selectional properties and requirements. 3.3 Subject-Verb Agreement The percolation mechanism introduced above can also be used to capture agreement between the subject and the inflected verb. In Minimalist theory, this agreement is only indirect: the subject actually agrees directly with T when it moves to become the latter’s specifier, having been initially selected for either by V (in the case of non-agent arguments) or by v (in the case of agent subjects see fig 3)13. There is also assumed to be some sort of syntactic agreement (Roberts (2010)) and/or phonetic (Chomsky (2001)) process operating between T and the inflected verb, resulting in any tense/agreement inflectional material generated in T(ense) being suffixed onto the finite verb. In MGbank, tense agreement is enforced between T and the finite verb by percolating a PRES or PAST selectional property from the selectee of the latter up through the tree so that it can be selected for by the [pres] or [past] T head. Subject-verb agreement, meanwhile, is enforced by also placing an agreement selectional 13A reviewer asks why all subjects are not directly selected for by V, suggesting that this appears to be a deviation from semantics, and more generally calls for some explanation of the underlying modelling decisions adopted here (e.g. head movements, case movements, null heads etc) which clearly deviate from the more surface oriented analyses of other formalisms used in NLP. In many cases these decisions rest on decades of research which we cannot hope to summarise here; for good introductions to Minimalism, see Radford (2004) and Hornstein et al. (2005). It is worth noting, however, that the null v head in fig 3 is essentially a valency increasing causative morpheme which ends up suffixed to the main verb (via head movement of the latter), effectively enabling it to take an additional ‘external’ argument. We can therefore view the V-v complex as a single synthetic verbal head, so that just as in a language like Turkish the verb ¨ol meaning ‘to die’ can be transformed from an intransitive to a transitive (meaning ‘to kill’) by appending to it the causative suffix d¨ur, in English a verb like break can be transformed from an intransitive (the window broke) to a transitive (he broke the window) by applying a null version of this morpheme. 
This cross-linguistic perspective (which makes this formalism potentially very relevant for machine translation) reflects a central goal of Minimalism, which is to show that at a relevant level of abstract representation, all languages share a common syntax (making them easier for children to learn). Most of the analyses adopted here are standard ones from the literature (see e.g. Larson’s (1988) VP Shell Hypothesis, Baker’s (1988) Uniform Theta Assignment Hypothesis, Koopman and Sportiche’s (1991) Verb Phrase Internal Subject Hypothesis, and Chomsky (1995; 2008) on little v). restriction (+3SG, +1PL, -3SG etc) on the finite verb’s selectee, and then percolating this up to the +CASE licensor of the T head. We thus have the following updated entries: him :: d -case{ACC.3SG} he :: d -case{NOM.3SG} helps :: d= +CASE{+ACC} v{+3SG.PRES} The percolation step from little v (lv) to T is shown in fig 4; lv has already inherited PRES and +3SG from V (helps) at this point, and these features now percolate to T’s licensor and selectee14 owing to the x variables; the PRES feature inherited from V by v is selected for by T, enforcing non-local tense agreement between T and V, while the +3SG enforces subject verb agreement15. 4 MG Supertagging The above selectional system restricts the parser’s search space sufficiently well that it is feasible to generate an initial MG treebank for many of the sentences in the PTB, particularly the shorter ones and those longer ones which do not require the full range of null heads to be allowed into the chart16. However, for longer sentences requiring null heads such as extraposers, topicalizers or focalizers, parsing remains impractically slow. In this section we show how computationally costly null heads can be factored out from MG parsing al14Note that selectional requirements are entirely inert on selectee and licensee features while, conversely, selectional properties are inert on selectors and licensors. 15For non-3SG present tense verbs, MGbank uses a -3SG negative selectional requirement; for verbs with more complex paradigms, however, the grammar allows for inclusive disjunctive selectional requirements. For example, the selectee feature of the was form of the verb be bears the feature [+1SG|+3SG], allowing it to take either a first or third singular subject. 16The Autobank parser holds certain costly null heads back from the chart and only introduces these incrementally if it fails to parse the sentence without them. The advantage of this strategy is that it improves efficiency for many sentences, but the disadvantage is that it can also result in correct analyses being bled by incorrect ones. The supertagging approach introduced in this section eliminates this problem, since null heads are now anchored to overt ones as part of complex categories, any of which may freely be assigned by the supertagger. 596 together by anchoring them to overt heads within complex overt categories extracted from this initial treebank. This allows much more of the disambiguation work to be undertaken by a statistical Markovian supertagger17, a strategy which has proven highly effective at rendering CCG parsing in particular efficient enough for large-scale NLP tasks. We also show how a standard CKY MG parser can be adapted to make use of these complex categories, and present some preliminary supertagging results. 4.1 Factoring null heads out from MG parsing Consider again the lexical items which appear along the spine of the clause in fig 2. 
[decl] :: t= c [pres] :: lv= +CASE t [trans] :: >v= =d lv helps :: d= +CASE v Recall that the null [trans] little v merges with the VP headed by overt helps, while the null [pres] T head merges with the vP, and the null [decl] C with TP. If we view each of these headcomplement merge operations as a link in a chain, then all of these null heads are either directly (in the case of v) or indirectly (in the case of T and C) linked to the overt verb. All of the information represented on V, v, T and C heads in Minimalism is in LTAG represented on a single overt lexical category (known as an initial tree). We can adopt this perspective for Minimalist parsing if we view chains of merges that start with some null head and end with some overt head as constituting complex overt categories. Given a corpus of derivation trees, it is possible to extract all such chains appearing in the corpus, essentially precompiling all of the attested combinations of null heads with their overt anchors into the lexicon. A very simple algorithm for doing this is given below. for each derivation tree ⌧: for each null head ⌘in ⌧: if ⌘is a proform: linkWithGovernor(⌘); else: linkWithHeadOfComplement(⌘); groupLinksIntoSupertags() 17During treebank generation we used the C&C (Clark and Curran, 2007) supertagger retrained to take gold CCGbank categories and words as input and output MGbank supertags. For each derivation tree, we first anchor all null heads either directly or indirectly to some overt head; this is achieved by extracting a set of links, each of which represents one merge operation in the tree. Each link is comprised of the two atomic MG lexical categories that are the arguments to the merge operation along with matching indices indicating which features are checked by the operation. Applying the algorithm to our example sentence would result in the following 3 links: link1: [decl] :: t=1 c, [pres] :: lv= +CASE t1 link2: [pres] :: lv=2 +CASE t, [trans] :: v= =d lv2 link3: [trans] :: v=3 =d lv, helps :: d= +CASE v3 The majority of null heads are simply linked with the head of their complement, the only exception being that null proforms, such as PRO in arbitrary control constructions18 (named [pro-d] in MGbank) and the null verbal heads used for VP ellipsis ([pro-v] in MGbank), are linked to whichever head selects for them (i.e. their governor). Assuming that null proforms are the only null heads appearing at the bottom of any extended projection (ep)19 in the corpus, this ensures that all of the lexical items inside a given supertag are part of the same ep, except for PRO, which is trivially an ep in its own right and must therefore be anchored to the verb that selects it. Note that some atomic overt heads (such as he and him in our example sentence) will not be involved in any links and will therefore form simplex supertags. Once the merge links and unattached overt heads are extracted, the algorithm then groups them together in such a way that any lexical items which are chained together either directly or indirectly by merge links are contained in the same group. Because links are only formed between null heads and their complements (except in the case of the null proform heads), and not between heads and specifiers or adjuncts, each chain ends with the first overt head encountered, so that every (null or overt) head is guaranteed to appear in just one group and each group is guaranteed to contain at most one overt lexical item. 
The above merge links would form one group, or supertag, represented compactly as follows: 18Other instances of control are treated as cases of Amovement following Boeckx et al. (2010). 19Here, we define the clausal extended projection as running from V up to the closest CP (or TP if CP is absent, as in ECM constructions), and for nominals from N up to the closest PP (or DP if PP is absent). 597 [decl] :: t=1 c [pres] :: lv=2 +CASE t1 [trans] :: v=3 =d lv2 helps :: d= +CASE v3 All of the subcategorization information of the main verb is contained within this supertag, but unlike in the case of LTAG categories, this is not always the case: if an auxiliary verb were present between little vP and TP, for instance, then only little v would be anchored to the main verb, while T and C would be anchored to the structurally higher auxiliary. C is the head triggering A’movements, such as wh-movement and topicalization. A consequence of this is that, although like LTAG (but unlike CCG) A’-movement is lexicalised onto an overt category here, that overt category is often structurally and linearly much closer to the A’-moved element than in LTAG. For instance, in the sentence what did she say that Pete eats for breakfast?, an LTAG would precompile the wh-movement onto the supertag for eats, whereas here the [int] C head licensing this movement would be precompiled onto did. As noted in Kasai et al. (2017), LTAG’s lexicalisation of unbounded A’-movement is one reason why supertagging has proven more difficult to apply successfully to TAG than to CCG, Markovian supertaggers being inherently better at identifying local dependencies. We hope that lexicalising A’movement into a supertag that is linearly closer to the moved item will therefore ultimately prove advantageous. 4.2 Adapting an existing CKY MG parser to use MG supertags The MG supertags can be integrated into an existing CKY MG parser quite straight forwardly as follows: first, for each supertag token assigned to each word in the sentence, we map the indices that indicate which features check each other into globally unique identifiers. This is necessary to ensure that different supertags and different instances of the same supertag assigned to different words are differentiated by the system. Then, whenever one of the constrained features is encountered, the parser ensures that it is only checked against the feature with the matching identifier. The parser otherwise operates as usual except that thousands of potential merge operations are now disallowed, with the result that the search space is drastically reduced (though this of course depends on the number of supertags assigned to each word). One complication concerns the dynamic programming of the chart. In standard CKY MG parsing, as with classical CFG CKY, items with the same category spanning the same substring are combined into a single chart entry during parsing. This prevents the system having to create identical tree fragments multiple times. But the current approach complicates this because many items now have different predetermined futures (i.e. their unchecked features are differentially constrained), and when the system later attempts to reconstruct the trees by following the backpointers, things can become very complicated. We can avoid this issue, however, simply by treating the unique identifiers that were assigned to certain selector features as part of the category. 
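A rough sketch of the two bookkeeping steps just described — mapping each assigned supertag's internal feature-checking indices to sentence-globally unique identifiers, and keeping those identifiers in the category used for chart entries — might look as follows in Python; the feature and item methods used here (with_index, with_features) are illustrative assumptions, not the parser's real interface.

    import itertools

    _fresh_ids = itertools.count(1)

    def relabel_supertag(supertag):
        # Map the supertag-internal indices (e.g. the shared "3" in
        # "v=3" and "v3") to identifiers unique across the whole
        # sentence, so two instances of the same supertag assigned to
        # different words can never check features against each other.
        mapping = {}
        relabelled = []
        for item in supertag:
            new_feats = []
            for feat in item.features:              # hypothetical feature objects
                if feat.index is not None:
                    if feat.index not in mapping:
                        mapping[feat.index] = next(_fresh_ids)
                    feat = feat.with_index(mapping[feat.index])
                new_feats.append(feat)
            relabelled.append(item.with_features(new_feats))
        return relabelled

    def chart_key(item):
        # Category used for dynamic programming: constrained selector
        # features keep their unique identifier, so items with different
        # predetermined futures are not collapsed into one chart entry.
        return tuple((f.name, f.index) for f in item.features)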
This has the effect of splitting the categories and will, for instance, prevent two single chain categories =d1 d= v and =d2 d= v from being treated as a single chart entry until their =d features have been checked. 4.3 Preliminary Results An LSTM supertagger similar to that in (Lewis et al., 2016) was trained on 13,000 sentences randomly chosen from MGbank, extracting various types of (super)tag from the derivation trees. A further 742 sentences were used for development, and 753 for testing, again randomly chosen. We tried training on just the automatically generated corpus and testing on the hand-crafted trees, but this hurt 1-best performances by 2-4%, no doubt owing to the fact that this hand-crafted set deliberately contains many of the rarer constructions in the Zipfian tail which didn’t make it into the automatically generated corpus20. With more data this effect should lessen. The results for n-best supertagging accuracies are given in table 1. 4.4 Discussion Unsurprisingly, the accuracies improve as the number of tags decreases. The CCGbank data contains by far the least tag types and has the highest performance. However, it is worth noting that the MG supertags contain a lot more information than their CCGbank counterparts, even once A’-movement and selectional restrictions are removed. For example, MGbank encodes all predicate-argument relations directly in the syntax, distinguishing for instance between subject 20There are 831 category types in the automatically generated corpus from a total of 1078 for the entire treebank. 598 rei ab rei-A’ ab-A’ ov ccg |tags| 3087 2087 1883 1181 717 342 1-best 79.1 81.1 83.0 84.2 88.0 92.4 2-best 88.4 90.2 91.1 91.9 95.3 97.1 3-best 91.6 93.5 94.1 94.8 97.1 98.3 10-best 96.4 97.4 97.9 98.2 99.2 99.5 25-best 97.6 98.5 98.9 99.1 99.7 99.7 40-best 98.0 98.7 99.0 99.4 99.8 99.8 Table 1: Accuracies on different MG (super)tag types showing the % of cases where the correct tag appears in the n-best list. The first row gives the number of different (super)tag types in the data; rei(fied) is supertags with all selectional properties and requirements; ab(stract) is supertags with all but 5 of these features removed22; -A’ indicates that null C heads, and [focalizer], [topicalizer], [wh] and [relativizer] heads were not included in the supertags, thereby delexicalising A’-movement and moving the formalism towards CCG; ov(ert) is the (reified) atomic overt tags; ccg is the ccgbank supertags. raising and subject control verbs, and between object raising (ECM) and object control verbs, whereas CCGbank itself does not. For a fairer comparison, therefore, we would need to combine CCGbank syntactic types with the semantic types of Bos (Bos et al., 2004). There are also many types of dependencies, such as those for rightward movement and correlative focus (either..or, neither..nor, both..and), which could be delixicalised to reduce the size of the supertag sets further. Of course, the more null heads that are allowed freely into the chart, the stronger the statistical model of the derivation itself must be. Finally, the MGbank grammar (particularly in its reified versions) is precision-oriented, in the sense that it blocks many ungrammatical sentence types (agreement/l-selection violations, binding theory violations, (anti)that-trace violations, wh-island violations etc). The extra information needed to attain this precision expands the tag set but should also ultimately help in pruning the search space, enabling the parser to try more tags. 
The CCGbank grammar, meanwhile, is much more flexible (making it very robust), and therefore leaves a much greater proportion of the task of constraining the search space to the probability model. The 1-best accuracies are clearly not high enough to be practical for wide-coverage MG parsing at present. By the time the 3-best supertags per word are considered, however, the accuracies are in all cases quite high, and by the 25best they are very high, although it is difficult to say at this point what level will be sufficient for wide-coverage parsing. The overt atomic tagging is much better, achieving high accuracy by the 3best, but these tags contain the least information and therefore leave much more disambiguation to the parsing model. Clearly, using MG supertags will require an algorithm that navigates the search space as efficiently as possible and allows the supertagger to try as many tags for each word as possible. We are in the process of re-implementing the A* search algorithm of (Lewis and Steedman, 2014), which allows their CCG parser to consider the complete distribution of 425 supertags for each word. The potential efficiency advantages of parsing with MG supertags are considerable: reparsing the seed set of 960 trees (which includes 207 sentences which were added to cover some constructions not found in the Penn Treebank) takes over 8 hours on a 1.4GHz Intel Core i5 Macbook Air with a perfect oracle providing the 1-best overt atomic tag, but just over 6 minutes using reified supertags. 5 Conclusion We presented two methods for constraining the parser’s search space and improving efficiency during wide-coverage MG parsing. The first extends the formalism with mechanisms for enforcing morphosyntactic agreements and selectional restrictions. The second anchors computationally costly null heads to overt heads inside complex overt categories, rendering the formalism fully compatible with Markovian supertagging techniques. Both techniques have proven useful for the generation of MGbank. We are now working on an A* MG parser which can consider the full distribution of supertags for each word and exploit the potential of these rich lexical categories. Acknowledgments A big thank you is due to Mark Steedman, as well as to the four anonymous reviewers for all their helpful comments and suggestions. Most especially, I would like to thank Miloˇs Stanojevi´c, who coded up the LSTM supertagger, ran the experiments reported in this paper and made some very helpful suggestions regarding the supertagging method described above. This project was supported by the Engineering and Physical Sciences Research Council (EPSRC) and an ERC H2020 Advanced Fellowship GA 742137 SEMANTAX and a Google Faculty Award. 599 References Mark C Baker. 1988. Incorporation: A theory of grammatical function changing. University of Chicago Press. Srinivas Bangalore and Aravind Joshi. 1999. Supertagging: An approach to almost parsing. Computational Linguistics, 25:237–265. Cedric Boeckx, Norbert Hornstein, and Jairo Nunes. 2010. Control as Movement. Cambridge University Press, Cambridge, UK. Johan Bos, Stephen Clark, Mark Steedman, James R Curran, and Julia Hockenmaier. 2004. Widecoverage semantic representations from a ccg parser. In Proceedings of the 20th international conference on Computational Linguistics, page 1240. Association for Computational Linguistics. Eugene Charniak. 2000. A maximum-entropy-inspired parser. 
In Proceedings of the 1st Meeting of the North American Chapter of the Association for Computational Linguistics, pages 132–139, Seattle, WA. Noam Chomsky. 1995. The Minimalist Program. MIT Press, Cambridge, Massachusetts. Noam Chomsky. 2000. Minimalist inquiries: The framework. In Roger Martin, David Michaels, and Juan Uriagereka, editors, Step by Step: Essays in Minimalist Syntax in Honor of Howard Lasnik, pages 89–155. MIT Press, Cambridge, MA. Noam Chomsky. 2001. Derivation by phase. Ken Hale: A life in language, pages 1–52. Noam Chomsky. 2008. On phases. In Robert Freidin, Carlos Peregrin Otero, and Maria Zubizarreta, editors, Foundational Issues in Linguistic Theory: Essays in Honor of Jean-Roger Vergnaud, pages 133– 166. MIT Press. Stephen Clark and James R Curran. 2007. Widecoverage efficient statistical parsing with ccg and log-linear models. Computational Linguistics, 33(4):493–552. Michael Collins. 1997. Three generative lexicalized models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, pages 16–23, Madrid. ACL. B. Cramer and Y. Zhang. 2010. Constraining robust constructions for broad-coverage parsing with precision grammars. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 223–231, Beijing. Meaghan Fowlie and Alexander Koller. 2017. Parsing minimalist languages with interpreted regular tree grammars. In Proceedings of the Thirteenth International Workshop on Tree Adjoining Grammar and Related Formalisms (TAG+13), pages 11–20. Jane Grimshaw. 1991. Extended projection. Unpublished manuscript, Brandeis University, Waltham, Mass. (Also appeared in J. Grimshaw (2005), Words and Structure, Stanford: CSLI). Hendrik Harkema. 2001. Parsing Minimalist Languages. Ph.D. thesis, UCLA, Los Angeles, California. Norbert Hornstein, Jairo Nunes, and Kleanthes Grohmann. 2005. Understanding Minimalism. Cambridge University Press. Jungo Kasai, Bob Frank, Tom McCoy, Owen Rambow, and Alexis Nasr. 2017. Tag parsing with neural networks and vector representations of supertags. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1712–1722. Gregory M. Kobele. 2005. Features moving madly: A formal perspective on feature percolation in the minimalist program. Research on Language and Computation, 3(4):391–410. Hilda Koopman and Dominique Sportiche. 1991. The position of subjects. Lingua, 85(2-3):211–258. Richard Larson. 1988. On the double object construction. Linguistic Inquiry, 19:335–392. Mike Lewis, Kenton Lee, and Luke Zettlemoyer. 2016. Lstm ccg parsing. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 221–231. Mike Lewis and Mark Steedman. 2014. A* ccg parsing with a supertag-factored model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 990–1000. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330. Ryan McDonald and Fernando Pereira. 2006. Discriminative learning and spanning tree algorithms for dependency parsing. University of Pennsylvania. Jens Michaelis. 1998. Derivational minimalism is mildly context–sensitive. In International Conference on Logical Aspects of Computational Linguistics, pages 179–198. Springer. 
Joakim Nivre, Johan Hall, and Jens Nilsson. 2006. Maltparser: A data-driven parser-generator for dependency parsing. In Proceedings of LREC, volume 6, pages 2216–2219. Joakim Nivre, Laura Rimell, Ryan McDonald, and Carlos Gomez-Rodriguez. 2010. Evaluation of dependency parsers on unbounded dependencies. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 833–841. Association for Computational Linguistics. 600 David Pesetsky. 1991. Zero syntax: Vol. 2: Infinitives. Unpublished MS., MIT. Carl Pollard and Ivan Sag. 1994. Head Driven Phrase Structure Grammar. CSLI Publications, Stanford, CA. Andrew Radford. 2004. Minimalist Syntax: Exploring the Structure of English. Cambridge University Press. Laura Rimell, Stephen Clark, and Mark Steedman. 2009. Unbounded dependency recovery for parser evaluation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 813–821, Singapore. ACL. Ian G Roberts. 2010. Agreement and head movement: Clitics, incorporation, and defective goals, volume 59 of Linguistic Inquiry Monograph. MIT Press. Yves Schabes, Anne Abeille, and Aravind K Joshi. 1988. Parsing strategies with ‘lexicalized’ grammars: application to tree adjoining grammars. In Proceedings of the 12th conference on Computational linguistics-Volume 2, pages 578–583. Association for Computational Linguistics. Edward Stabler. 1997. Derivational minimalism. In Logical Aspects of Computational Linguistics (LACL’96), volume 1328 of Lecture Notes in Computer Science, pages 68–95, New York. Springer. Edward P. Stabler. 2001. Recognizing head movement. In Logical Aspects of Computational Linguistics: 4th International Conference, LACL 2001, Le Croisic, France, June 27-29, 2001, Proceedings., volume 4, pages 245–260. Mark Steedman and Jason Baldridge. 2011. Combinatory categorial grammar. In Robert Borsley and Kirsti B¨orjars, editors, Non-Transformational Syntax: A Guide to Current Models, pages 181–224. Blackwell, Oxford. John Torr. 2017. Autobank: a semi-automatic annotation tool for developing deep minimalist grammar treebanks. In Proceedings of the EACL 2017 Software Demonstrations, Valencia, Spain, April 37 2017, pages 81–86. John Torr and Edward P. Stabler. 2016. Coordination in minimalist grammars: Excorporation and across the board (head) movement. In Proceedings of the Twelfth International Workshop on Tree Adjoining Grammar and Related Formalisms (TAG+12), pages 1–17. Huijia Wu, Jiajun Zhang, and Chengqing Zong. 2017. A dynamic window neural network for ccg supertagging. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), pages 3337–3343. Wenduan Xu. 2016. Lstm shift-reduce ccg parsing. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1754–1764.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 601–610 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 601 Not that much power: Linguistic alignment is influenced more by low-level linguistic features rather than social power Yang Xu, and Jeremy Cole and David Reitter College of Information Sciences and Technology The Pennsylvania State University [email protected] and [email protected] and [email protected] Abstract Linguistic alignment between dialogue partners has been claimed to be affected by their relative social power. A common finding has been that interlocutors of higher power tend to receive more alignment than those of lower power. However, these studies overlook some low-level linguistic features that can also affect alignment, which casts doubts on these findings. This work characterizes the effect of power on alignment with logistic regression models in two datasets, finding that the effect vanishes or is reversed after controlling for low-level features such as utterance length. Thus, linguistic alignment is explained better by low-level features than by social power. We argue that a wider range of factors, especially cognitive factors, need to be taken into account for future studies on observational data when social factors of language use are in question. 1 Introduction The effect of social power on language use in conversations has been widely studied. The Communication Accommodation Theory (Giles, 2008) states that the social power of speakers influence the extent to which conversation partners accommodate (or align, coordinate) their communicating styles towards them. This theory is supported by findings from qualitative studies on employment interviews (Willemyns et al., 1997), classroom talks (Jones et al., 1999), and the more recent data-driven studies on large online communities and court conversations (Danescu-NiculescuMizil et al., 2012; Jones et al., 2014; Noble and Fernández, 2015). In particular, DanescuNiculescu-Mizil et al. (2012) uses a probabilitybased measure of linguistic alignment to demonstrate that people align more towards conversation partners of higher power, i.e., the admin users in Wikipedia talk-page, and the justices in U.S. supreme court conversations, than those of lower power, i.e., the non-admin users and the lawyers. However, while these results find sound explanations from socio-linguistic theories, they are still somewhat surprising from the perspective of cognitive mechanisms of language production, because the mutual alignment between interlocutors of in natural dialogue can be explained by an automatic and low-level priming process (Pickering and Garrod, 2004). It is known that the strength of alignment is sensitive to low-level linguistic features (e.g., words, syntactic structures etc.), such as temporal clustering properties (Myslín and Levy, 2016), syntactic surprisal measured by prediction error (Jaeger and Snider, 2013), and lexical information density (Xu and Reitter, 2018). Then why, or under what mechanisms, can alignment be affected by the relatively high-level social perceptions of power as reported? Could it be the case that the effect of power on alignment is actually due to the other low level features in language, such as the ones mentioned above? Is the effect of power still observable, if we control for other factors? How large is the effect? 
Is it significant enough to be captured by computational measures of alignment? Answering these questions will help clarify the role of social factors in linguistic alignment, and improve our understanding of language production in general. In this study, we conduct a two-step model analysis. First, we use a basic model that has two predictors, count (number of a certain linguistic marker in the preceding utterance) and power (power status of the preceding speaker), to predict the occurrence of the same marker in the follow602 ing utterance. Here, the linguistic markers are derived from 11 Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2001) categories (e.g., article, adverb, etc.). With the basic model, the main effect of count characterizes the strength of alignment, and the interaction between count and power characterizes the effect of power on alignment (Section 3). Second, we use an extended model that includes a third predictor, utterance length (It is chosen as a typical low-level linguistic feature, discussed in Section 2.3), on top of the basic model. With the extended model, we aim to examine whether the inclusion of utterance length will influence the interaction between count and power (Section 4). Therefore, we can examine the extent to which the effect of power on alignment is confounded by low-level linguistic features. To clarify, our goal is not to disprove the existence of social accommodation in dialogue. Nonetheless, it is important to distinguish between what is caused by automatic priming-based alignment and what is caused by high-level, intentional accommodation. As we will discuss, these are different processes with different predictions. Throughout this paper we use the term alignment to refer to the priming-based process, and accommodation to refer to the intentional process. 2 Related Work 2.1 Social power and linguistic alignment The social factors of language use have been widely studied. Communication Accommodation Theory (Giles, 2008) posits that individuals adapt their communication styles to increase or decrease the social distance from their interlocutors. One factor that affects the adaptation of linguistic styles is social power. Typically, people of lower power converge their linguistic styles to those of higher power; for example, interviewees towards interviewers (Willemyns et al., 1997), or students towards teachers (Jones et al., 1999). More recently, sensitive quantitative methods have been applied to this line of inquiry. Danescu-Niculescu-Mizil et al. (2012) computed the probability-based linguistic coordination measure among Wikipedia editors and participants of the US supreme court, and they showed that people with low power (e.g., lawyers, non-admins) exhibit greater coordination than people with high power (Justices, admins). Using the same data, Noble and Fernández (2015) found that linguistic coordination is positively correlated with social network centrality, and this effect is even greater than the effect of power status distinction. The aforementioned studies do not include lowlevel language features in their analysis and thus overlook the possibility that cognitive mechanisms may be able to more readily explain the data. Importantly, as we will later discuss, these studies use a measurement of alignment that we believe is more appropriately measuring the automatic process, rather than the intentional one. 2.2 Quantifying linguistic alignment A variety of computational measures of linguistic alignment have been developed. 
Some quantify the increase in conditional probability of certain elements (words or word types) given that they have appeared earlier (Church, 2000; DanescuNiculescu-Mizil et al., 2012). Some compute the proportion of repeated lexical entries or syntactic rules between two pieces of text (Fusaroli et al., 2012; Wang et al., 2014; Xu and Reitter, 2015). Some use the coefficients returned by generalized linear models (McCullagh, 1984; Breslow and Clayton, 1993; Lindstrom and Bates, 1990) as an index of alignment (Reitter and Moore, 2014). A large body of the existing computational measures intensively use LIWC (Pennebaker et al., 2001) to construct representations of language users’ styles, which can be used to measure alignment with distance-like metrics (Niederhoffer and Pennebaker, 2002; Jones et al., 2014). Many of these approaches do not distinguish between different levels of linguistic analysis and different psycholinguistic processes (phonological, lexical, syntactic, etc.), and neither do we. Alignment is consistently present across these levels and processes, although it is not as clear in naturalistic language as it is in the constrained utterances of experiments, particularly at the syntactic level (Healey et al., 2014). We are concerned with the question of whether alignment is a socially linked, intentional adaptation process, as opposed to addressing any particular cognitive model. More recently, Doyle et al. (2016) pointed out that most existing measures are difficult to compare, and emphasized the need for a universal measure. The Hierarchical Alignment Model (HAM; Doyle et al., 2016) and Word-Based HAM (WHAM; Doyle and Frank, 2016) use statistical inference techniques, which out-perform other 603 measures in terms of robustness of capturing linguistic alignment in social media conversations. In this study, we choose to use generalized linear models to quantify linguistic alignment, avoiding issues with more complex, and less inspectable models. For instance, the commonly used probability based methods and their more advanced variants (HAM and WHAM) lack the flexibility to jointly examine multiple factors (e.g., speaker groups, utterance length etc.) that influence alignment. Another issue is that they do not take into account the number of occurrences of linguistic markers, which is known to affect alignment (see Section 2.3). Conversely, though linear models do not give an accurate per-speaker estimate of alignment (which we do not need for the purpose of this study), they do provide the ability to examine multiple factors that influence alignment by simply including multiple predictors in the model. As should be clear, a generalized linear model also already takes into account baseline usage with a fitted intercept. Given these considerations, we use generalized linear models for quantitative analysis. The formulation of our models is described in Sections 3.2 and 4.1. 2.3 Cognitive constraints on linguistic alignment: why utterance length matters There are many, at times competing, cognitive explanations of linguistic alignment in both comprehension and production. Jaeger and Snider (2013) explained alignment as a consequence of expectation adaptation, and they found that stronger alignment is associated with syntactic structures that have higher surprisal (roughly speaking, less common). Alignment in language production can also be modeled as a general memory phenomenon (Reitter et al., 2011), which explains a number of known interaction effects. 
Myslín and Levy (2016) found that sentence comprehension is faster when the same syntactic structure clusters in time in prior experience than when it is evenly spaced in time. Myslín and Levy (2016) cast comprehension priming as the rational expectation for repetition of stimuli. Though this result is not directly related to comprehensionto-production priming, it makes sense to anticipate that production could also be sensitive to the clustering patterns of linguistic elements, because comprehension and production are closely coupled processes (Pickering and Garrod, 2007). Utterance length, i.e., the number of words in utterance, is a feature that closely relates to both surprisal and clustering properties. Longer utterances tend to have higher syntactic surprisal (Xu and Reitter, 2016a), and it is reasonable to assume they tend to contain more evenly distributed stimuli. Thus, utterance length is a low-level linguistic feature that correlates with many of the causes of alignment. In this way, we use utterance length as a stand-in for low-level linguistic features as a whole when comparing it with social power, a much higher-level feature. Examining alignment (in social science research and elsewhere) therefore calls for controlling sentence length. 3 Experiment 1: Basic model In Experiment 1, we justify the practice of using generalized linear models to quantify linguistic alignment. We compare two ways of characterizing the occurrence of LIWC-derived markers in a preceding utterance, binary presence and numeric count, to determine which results in better model. We use an interaction term in the model to quantify the effect of the power status of speakers on linguistic alignment, which serves as the basis for the following sections. 3.1 Corpus data We use two datasets compiled by DanescuNiculescu-Mizil et al. (2012): Wikipedia talkpage corpus (Wiki) and a corpus of United States supreme court conversations (SC). Wiki is a collection of conversations from Wikipedia editor’s talk Pages1, which contains 125,292 conversations contributed by 30,732 editors. SC is a collection of conversations from the U.S. Supreme Court Oral Arguments2, with 51,498 utterances making up 50,389 conversational exchanges, from 204 cases involving 11 Justices and 311 other participants (lawyers or amici curiae). A conversation consists of a sequence of utterances, {ui}(i = 1, 2, . . . , N), where N is the total number of utterances in the conversation. Because people take turns to talk in conversation, ui and ui+1 are always from different speakers. Since our interest here is the alignment between different speakers (as opposed to within the same speaker), we use a sliding window of size 2 to go through 1http://en.wikipedia.org/wiki/ Wikipedia:Talk_page_guidelines 2http://www.supremecourt.gov/oral_ arguments/ 604 the whole conversation, generating a sequence of adjacent utterance pairs, {⟨primei, targeti⟩}(i = 1, 2 . . . N −1). Next, we process each utterance ui by counting the number of occurrences of 14 linguistic markers that are derived from LIWC categories, resulting in 14 counts for each utterance. These 14 linguistic markers are: high frequency adverbs (adv), articles (art), auxiliary verbs (auxv), certainty (certain), conjunctions (conj), discrepancy (discrep), exclusion (excl), inclusion (incl), impersonal pronouns (ipron), negations (negate), personal pronouns (ppron), prepositions (prep), quantifiers (quant), and tentativeness (tentat). 
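As an illustration of this preprocessing, a Python sketch of the pairing and counting steps is given below, with the fourteen marker names repeated in MARKERS; liwc_lexicon stands in for a mapping from each category to its word list (LIWC itself is licensed separately), and the exact-match lookup ignores LIWC's stem/wildcard matching, so this is a simplification rather than a faithful reimplementation.

    MARKERS = ["adv", "art", "auxv", "certain", "conj", "discrep", "excl",
               "incl", "ipron", "negate", "ppron", "prep", "quant", "tentat"]

    def prime_target_pairs(utterances):
        # Slide a window of size 2 over u_1 ... u_N, yielding the adjacent
        # pairs <prime_i, target_i>; adjacent turns always come from
        # different speakers, so every pair crosses speakers.
        return [(utterances[i], utterances[i + 1])
                for i in range(len(utterances) - 1)]

    def marker_counts(tokens, liwc_lexicon):
        # Count occurrences of each LIWC-derived marker in one utterance.
        # liwc_lexicon is assumed to map each marker name to a set of
        # word forms; exact matching is used here purely for illustration.
        counts = {m: 0 for m in MARKERS}
        for tok in tokens:
            for m in MARKERS:
                if tok.lower() in liwc_lexicon[m]:
                    counts[m] += 1
        return counts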
These fourteen markers come from taking the union of the 8 markers used by Danescu-Niculescu-Mizil et al. (2012) and the 11 markers used by Doyle and Frank (2016), which are the main studies we wanted to compare with. 3.2 Statistical models We formulate alignment as the impact of using certain linguistic elements in the preceding utterance on their chance to appear again in the following utterance. In the language of generalized linear models, we use the occurrence of linguistic markers in target as the response variable and the predictor is their occurrence in prime. These occurrences can be represented as either a boolean or a count. Thus alignment is characterized by the β coefficient of the predictor, which allows the model to distinguish the prevalence of Occurrence or another feature in primed situations as compared to its prior in the corpus. Factors that may influence alignment (e.g., social power) can then be examined by adding a corresponding interaction term to the model. Our first step, then, is to replicate the previous studies’ findings of the effect of social power on alignment. Two models were fitted, predicting the presence of the linguistic marker m in target utterance over its absence. We fit models both corresponding to a binary predictor (Cpresence) and a count-based one (Ccount). Both models include a second binary predictor, Cpower, indicating the power status of the prime speaker (high vs. low), and its interaction with Cpresence and Ccount, respectively. Additionally, random intercepts on linguistic marker and target speaker are fitted, based on the consideration that individuals might have different levels of alignment towards different markers. Ccount is log-transformed to maximize model fit according to Bayesian Information Criterion; this is commensurate with standard psycholinguistic practice and known cumulative priming and memory effects. Equation (1) shows the count-based model. To reiterate, the interaction term Ccount ∗Cpower characterizes the effect of power on alignment. logit(m) = ln p(m in target) p(m not in target) = β0 + β1Ccount + β2Cpower + β3Ccount ∗Cpower (1) 3.3 Model coefficients The main effects of Cpresence and Ccount are significant (p < 0.001) and positive in both corpora (SC: βpresence = 0.439, βcount = 0.291; Wiki: βpresence = 0.440, βcount = 0.395), which captures the linguistic alignment from prime to target. However, there is difference in how alignment is influenced by power between the two corpora: In SC, Ccount ∗Cpower is significant (β = 0.078, p < 0.001), but Cpresence ∗Cpower is non-significant; In Wiki, on the contrary, Cpresence ∗Cpower is marginally significant (β = 0.014, p = .055), but Ccount ∗Cpower in not significant. No collinearity is found between Ccount (or Cpresence) and Cpower (Pearson correlation r < 0.2). To explore why using Cpresence vs. Ccount results in different significance levels for SC and Wiki, we fit a individual linear model for each linguistic marker, using 14 disjoint subsets of each corpus. We present the z scores and significance levels of the two interaction terms are reported in Table 1. First, in SC the interaction term Cpresence ∗Cpower is significant for 9 out of 14 markers. In Wiki, Ccount ∗Cpower is significant for 5 out of 14 markers. This suggests that the interaction between the occurrence of linguistic markers and the power status of speakers exists within a subset of the linguistic categories, but not across all of them. 
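Before turning to the per-marker breakdown in Table 1, the following Python sketch shows how a model of the form in Equation (1) could be fit. It is a simplified, fixed-effects approximation: the models reported in this section also include random intercepts for marker and target speaker, which a plain logistic regression cannot express (a mixed-effects tool such as lme4's glmer would be needed for that), and the column names and the log(count + 1) transform are assumptions.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_basic_model(df: pd.DataFrame):
        # df is assumed to hold one row per (prime, target, marker) triple:
        #   present : 1 if the marker occurs in the target utterance, else 0
        #   count   : occurrences of the marker in the prime utterance
        #   power   : "high" or "low" power status of the prime speaker
        df = df.copy()
        df["log_count"] = np.log(df["count"] + 1)   # log transform; +1 handles zeros
        # Fixed-effects approximation of Equation (1).
        model = smf.logit("present ~ log_count * C(power)", data=df)
        result = model.fit()
        # The log_count:C(power) interaction coefficient plays the role
        # of beta_3 in Equation (1): it tests whether alignment strength
        # differs with the prime speaker's power status.
        return result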
Thus, we consider this first experiment a replication of past findings of the effect of social power on alignment: social power has a significant effect across certain markers, but its overall effect is neutralized in the full model since some markers at not significant. This analysis also revealed that Ccount ∗Cpower is more reliable in capturing this effect, which is what we will use in the following experiment. 605 Table 1: Summary of the 14 models that fit individual markers on disjoint data subsets. Wald’s z-score and significance level (∗∗∗for p < 0.001, ∗∗for p < 0.01, ∗for p < 0.05, and † for 0.05 < p < 0.1) of the interaction terms (Cpresence∗Cpower or Ccount ∗Cpower) are reported. Marker z score Cpresence ∗Cpower Ccount ∗Cpower SC Wiki SC Wiki adv 1.19 -0.33 6.16*** -0.40 art 1.99* 0.36 4.60*** 1.27 auxv 3.72*** -0.62 5.81*** -0.83 certain -0.02 3.19** 1.94† 2.84** conj 0.54 -0.20 6.79*** 0.39 discrep 5.44*** -0.05 8.03*** 0.25 excl -0.53 1.96* 2.94** 2.16* incl 2.86** 0.80 5.24*** 2.15* ipron 6.84*** 1.70† 10.22*** 1.90† negate 2.83** 3.14** 5.49*** 3.11** ppron 2.74** -1.86† 1.29 -1.13 prep 4.76*** 2.37* 6.87*** -0.19 quant 0.89 1.01 4.14*** -0.04 tentat 3.69*** 0.17 4.52*** -0.78 3.4 Visualizing the effect of power To better understand the interaction term Ccount ∗ Cpower, we divide the data into two groups by whether Cpower is high or low, and fit a model on each of the groups. In the models we include only one predictor Ccount (see Equation (2)). Then we compare the main effects (β1 coefficients) from the two groups. logit(m) = β0 + β1Ccount (2) Unsurprisingly, the main effects of Ccount are significant for both groups (p < 0.001). But more importantly, the β1 coefficients of the high power group are larger than those of the low power group. For SC, the difference is very salient: βhigh 1 = 0.416 (SE = 0.006), βlow 1 = 0.272 (SE = 0.005). For Wiki, the difference is smaller: βhigh 1 = 0.424 (SE = 0.007), βlow 1 = 0.386 (SE = 0.005). This is in line with the nonsignificant coefficient of Ccount ∗Cpower in Wiki. In fact, the models of Wiki are fitted on a subset of data that contain the 5 (out of 14) markers that have significant coefficients of Ccount ∗Cpower in the individual models shown in Table 1 (certain, excl, incl, ipron, negate), so that the difference in slopes is presented at maximal degree. In Figure 1 we illustrate the βhigh and βlow coefficients of Ccount by plotting the predicted probability (the reversed logit transformation of the lefthand side term of Equation (2)) against Ccount (logtransformed). It is obvious that the slope of βhigh is larger than that of βlow (more salient in SC), indicating the significant interaction between Ccount and Cpower. 0.25 0.50 0.75 1.00 0 1 2 3 4 log( Ccount) Predicted probability (a) Supreme Court 0.00 0.25 0.50 0.75 1.00 0 1 2 3 log( Ccount) Predicted probability Power High Low Count 100 101 102 103 104 (b) Wikipedia Figure 1: The predicted probability of marker appearing in target (the reverse logit transform of the left hand side of Equation (2)) against the number of markers in prime, i.e., Ccount (log-transformed), grouped by the power of prime speaker, i.e., high vs. low. Divergent slopes indicate significant interactions. Colored hexagons indicate the number of data points within that region. 3.5 Discussion The occurrence of linguistic markers in prime is a strong predictor of whether the same marker will appear again in target. 
The coefficients of Ccount can be viewed as indicators of the linguistic alignment between interlocutors: larger positive βs indicate stronger alignment, while smaller or even negative βs indicate weaker and reverse alignment, respectively (not found in our data). Our results confirm the previously reported effect of power on linguistic alignment. The significant β′ coefficient of Ccount∗Cpower means that the β of Ccount is dependent on Cpower. In other words, the strength of alignment varies significantly depending on different power levels (i.e., high vs. low) of the prime speaker (reflected by the different slopes in Figure 1). However, we need to keep in mind that this affirmative finding is not safe, because it based on a simple model that has only one key predictor, Cpower. According to our hypothesis, the strength of alignment can be influenced by a lot of low-level linguistic features, and we are not sure yet if the effect of power will still be visible after we includes more predictors representing 606 those features. This will be the next step experiment. Additionally, the results also suggest that the influence of power on linguistic alignment is better characterized by the more fine-grained cumulative effect of linguistic markers than when it is simply explained by the mere difference between their absence or presence. Thus, we will discard Cpresence and proceed with Ccount. 4 Experiment 2: Extended model In our first experiment, we replicated the effect of prime speakers’ power status on the linguistic alignment from target speakers, from the significant interaction term Ccount ∗Cpower. Now, we want to determine if the effect of power remains significant after taking into account utterance length. As discussed, our hypothesis is that alignment (as measured by changes in probability of using LIWC categories) is best explained by low-level linguistic features that would be taken into account by an automatic priming process. 4.1 Statistical models We add a new predictor to Equation (1), CpLen, which is the number of words in prime, resulting in an extended model shown in Equation (3). We are interested to see if β4 remains significant when the other interaction terms (with corresponding coefficients β5, β6 and β7) are added. logit(m) = ln p(m in target) p(m not in target) = β0 + β1Ccount + β2Cpower + β3CpLen + β4Ccount ∗Cpower + β5Ccount ∗CpLen + β6Cpower ∗CpLen + β7Ccount ∗Cpower ∗CpLen (3) Note that we used the same subset of Wiki as used in Section 3.4 (using the five most significant LIWC categories), so that the strongest effect of Ccount ∗Cpower is considered. 4.2 Model coefficients The coefficients of the full model are in Table 2. Surprisingly, the coefficient of Ccount ∗Cpower is significantly negative in SC, and non-significant in Wiki (see highlighted rows), which are in contrast to the positive coefficients of the same term Table 2: Summary of the model described in Equation (3): β coefficients, Wald’s z-score and significance level (∗∗∗for p < 0.001, ∗∗for p < 0.01, ∗for p < 0.05) for all predictors and interactions. 
Corpus Predictor β z SC Intercept 0.360 2.40* Ccount 0.213 26.92*** Cpower -0.060 -3.39*** CpLen 0.080 13.03*** Ccount ∗Cpower -0.103 -9.95*** Ccount ∗CpLen -0.066 -15.35*** Cpower ∗CpLen 0.231 25.25*** Ccount ∗Cpower ∗CpLen 0.036 4.79*** Wiki Intercept 0.330 1.40 Ccount 0.149 31.11*** Cpower -0.074 -10.52*** CpLen 0.179 40.80*** Ccount ∗Cpower 0.001 0.14 Ccount ∗CpLen 0.022 6.13*** Cpower ∗CpLen 0.042 5.52*** Ccount ∗Cpower ∗CpLen -0.010 -1.61 in Table 1. It indicates that the observed effect of power on alignment depends on the presence of CpLen in the model. No collinearity is found between Cpower and other predictors: Pearson correlation r < 0.2; Variance inflation factor (VIF) is low (< 2.0) (O’brien, 2007). To further demonstrate how the coefficient of Cpower ∗Ccount is dependent on CpLen, we remove Ccount ∗CpLen, Cpower ∗CpLen and Ccount ∗Cpower ∗ CpLen from Equation (3) stepwisely, and examine Ccount ∗Cpower in the corresponding remaining models. z-scores, significance levels, and the Akaike information criterion (AIC) score (Akaike, 1998) of the remainder models are reported in Table 3. In the full model, and when Ccount ∗Cpower ∗ CpLen or Ccount∗CpLen is removed from the model, the coefficients of Cpower ∗Ccount are significantly negative in SC and non-significant in Wiki. Only when Cpower∗CpLen is removed, the coefficients of Ccount ∗Cpower become significantly positive (the last two rows in Table 3). However, the models that have negative or non-significant coefficient for Cpower ∗Ccount have lower AIC scores than those that have positive coefficient (The full model has the lowest AIC score), which indicates that the former ones have higher quality. Altogether, the stepwise analysis not only indicates that the positive interaction between Cpower and Ccount shown in our basic model (Section 3) is unreliable, but also suggests that a negative interaction (SC) or 607 non-significant interaction is more preferable. 4.3 Visualizing interaction effect To illustrate how the interaction Cpower ∗Ccount diminishes after adding CpLen into the extended model, we cluster different ranges of CpLen and determine how the amount of priming changes with Ccount w.r.t. different combinations of Cpower and CpLen. This is a common practice to interpret linear models with three-way interactions (Houslay, 2014). To cluster, we first compute the mean of CpLen (i.e., the average utterance length), MpLen. Then we divide the data by whether CpLen is above or below MpLen. Then we compute the mean of CpLen for the upper and lower parts of data, resulting in ML pLen and MS pLen respectively (L for long and S for short). Now, we can replace the continuous variable CpLen to a categorical and ordinal one that has two values, {MS pLen, ML pLen}, which represent the length of relatively short and long utterances respectively. Together with the other categorical variable, Cpower, which has two values, high and low, we have four combinations: CpLen = MS pLen and Cpower = high (SH), CpLen = ML pLen and Cpower = high (LH), CpLen = MS pLen and Cpower = low (SL), CpLen = ML pLen and Cpower = low (LL). In Figure 2 we plot the smoothed regression lines of predicted probability against Ccount, w.r.t. the above four groups of CpLen and Cpower combinations. Here Ccount is not log-transformed, because it better demonstrates the trend of the fitted regression lines. Figure 2 intuitively shows that CpLen is a more determinant predictor than Cpower. Division by power, i.e., high (SH and LH groups) vs. 
low (SL and LL groups), does not produce a salient difference in slopes: the high-power (solid) and low-power (dashed) lines have very similar slopes within the same prime utterance length group (indicated by color). However, division by prime utterance length, i.e., short (SH and SL) vs. long (LH and LL), results in very significant differences in slopes: in Figure 2a the short-CpLen group (orange) has larger slopes than the long-CpLen group (blue), while in Figure 2b the short group has smaller slopes than the long group. Figure 2: The predicted probability of the marker appearing in target against Ccount, grouped by the four combinations of CpLen (long vs. short, indicated by color) and Cpower (high vs. low, indicated by line type): LH, LL, SH, and SL; panel (a) shows Supreme Court, panel (b) Wikipedia. Colored hexagons indicate the number of data points. 4.4 Discussion Adding CpLen to the model has a strong impact on the previous conclusion about the effect of power on alignment. First of all, we find a negative interaction between Ccount and Cpower in SC and a non-significant effect in Wiki, which is contrary to the previous findings reported by Danescu-Niculescu-Mizil et al. (2012). Moreover, we doubt the reliability of a positive interaction because the valence of its β varies when other interaction terms (associated with CpLen) are removed or added, and a negative or non-significant interaction is preferred
Since surprisal has been found to be closely related with utterance length in dialogue (Genzel and Charniak, 2003; Xu and Reitter, 2016b,a), it is reasonable to expect that longer utterances receive stronger alignment because they contain content of higher surprisal. The discrepancy between Wiki and SC in terms of the direction of Ccount ∗CpLen is an interesting phenomenon to explore, because it can tell us something about how the form of dialogue (Wiki consists of online conversations and SC consists of face-to-face ones) affects the underlying cognitive mechanism of language production. Regardless, our main finding is that low-level linguistic features, such as utterance length, have a strong effect on linguistic alignment. These effects are an important confound to take into account when examining higher-level features, such as social power. In particular, the effect of social power cannot be reliably detected by linear models once introducing utterance length. Another interesting piece of result is the significant interaction term Cpower ∗CpLen, which implies that the power status of speaker and how long he/she tends to speak are not totally unrelated. Significant but weak correlation are found between Cpower and CpLen (using Pearson’s correlation score): r = −0.059 in SC; r = −0.018 in Wiki. This correlation may show some kind of a linguistic manifestation of social power, but since it is not directly related to the alignment process, we do not further discuss it in this paper. In summary of the results, we conjecture that the previously reported effect of power (DanescuNiculescu-Mizil et al., 2012) is likely to be caused by the correlation between power status and utterance length, though further investigation is needed to confirm this. Moreover, utterance length is just one simple factor, and there are many more other linguistic features that can correlate with social power: e.g., the surprisal based measure of lexi609 cal information etc. 5 Conclusion To sum up, our findings suggest that the previously reported effect of power on linguistic alignment is not reliable. Instead, we consistently align towards language that shares certain low-level features. We call for the inclusion of a wider range of factors in future studies of social influences on language use, especially low-level but interpretable cognitive factors. Perhaps in most scenarios, alignment is primarily influenced by linguistic features themselves, rather than social power. We are not denying the existence of accommodation caused by the social distance between interlocutors. However, we want to stress the difference between the priming-induced alignment at lower linguistic levels and the intentional accommodation that is caused by higher-level perception of social power. The latter should be a relatively stable effect that is independent on the lowlevel linguistic features. In particular, our findings suggest that the probability change of LIWC categories is more likely to be a case of automatic alignment, rather than an intentional accommodation, because it is better explained by lowerlevel linguistic features (utterance length). Therefore, we suggest that future work on social power and language use should consider other (maybe higher-level) linguistic elements. Acknowledgement We sincerely thank Lizhao Ge for her useful advice on the statistical methods used. The work leading to this paper was funded by the National Science Foundation (IIS-1459300 and BCS1457992). References Hirotogu Akaike. 1998. 
Information theory and an extension of the maximum likelihood principle. In Selected Papers of Hirotugu Akaike, pages 199–213. Springer. Norman E Breslow and David G Clayton. 1993. Approximate inference in generalized linear mixed models. Journal of the American Statistical Association, 88(421):9–25. Kenneth W Church. 2000. Empirical estimates of adaptation: the chance of two Noriega’s is closer to p/2 than p2. In Proceedings of the 18th Conference on Computational Linguistics, volume 1, pages 180– 186, Saarbrücken, Germany. Cristian Danescu-Niculescu-Mizil, Lillian Lee, Bo Pang, and Jon Kleinberg. 2012. Echoes of power: Language effects and power differences in social interaction. In Proceedings of the 21st International Conference on World Wide Web, pages 699–708, Lyon, France. Gabriel Doyle and Michael C Frank. 2016. Investigating the sources of linguistic alignment in conversation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 526–536, Berlin, Germany. Gabriel Doyle, Dan Yurovsky, and Michael C Frank. 2016. A robust framework for estimating linguistic alignment in Twitter conversations. In Proceedings of the 25th International Conference on World Wide Web, pages 637–648, Montreal, Canada. Riccardo Fusaroli, Bahador Bahrami, Karsten Olsen, Andreas Roepstorff, Geraint Rees, Chris Frith, and Kristian Tylén. 2012. Coming to terms quantifying the benefits of linguistic coordination. Psychological Science, 23(8):931–939. Dmitriy Genzel and Eugene Charniak. 2003. Variation of entropy and parse trees of sentences as a function of the sentence number. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 65–72. Association for Computational Linguistics. Howard Giles. 2008. Communication accommodation theory. In L. A. Baxter and D. O. Braithewaite, editors, Engaging theories in interpersonal communication: Multiple perspectives, pages 161–173. Sage, Thousand Oaks, CA. Patrick G. T. Healey, Matthew Purver, and Christine Howes. 2014. Divergence in dialogue. PLOS ONE, 9(6):1–6. Thomas M. Houslay. 2014. Understanding 3way interactions between continuous variables. https://tomhouslay.com/2014/03/21/ understanding-3-way-interactionsbetween-continuous-variables/. T Florian Jaeger and Neal E Snider. 2013. Alignment as a consequence of expectation adaptation: Syntactic priming is affected by the prime’s prediction error given both prior and recent experience. Cognition, 127(1):57–83. Elizabeth Jones, Cynthia Gallois, Victor Callan, and Michelle Barker. 1999. Strategies of accommodation: Development of a coding system for conversational interaction. Journal of Language and Social Psychology, 18(2):123–151. Simon Jones, Rachel Cotterill, Nigel Dewdney, Kate Muir, and Adam Joinson. 2014. Finding zelig in 610 text: A measure for normalizing linguistic accommodation. In 25th International Conference on Computational Linguistics, Bath, UK. Mary J Lindstrom and Douglas M Bates. 1990. Nonlinear mixed effects models for repeated measures data. Biometrics, pages 673–687. Peter McCullagh. 1984. Generalized linear models. European Journal of Operational Research, 16(3):285–292. Mark Myslín and Roger Levy. 2016. Comprehension priming as rational expectation for repetition: Evidence from syntactic processing. Cognition, 147:29–56. Kate G Niederhoffer and James W Pennebaker. 2002. Linguistic style matching in social interaction. Journal of Language and Social Psychology, 21(4):337– 360. Bill Noble and Raquel Fernández. 2015. 
Centre stage: How social network position shapes linguistic coordination. In Proceedings of CMCL 2015, pages 29– 38, Denver, CO. Robert M O’brien. 2007. A caution regarding rules of thumb for variance inflation factors. Quality & Quantity, 41(5):673–690. James W Pennebaker, Martha E Francis, and Roger J Booth. 2001. Linguistic inquiry and word count: LIWC 2001. Mahway: Lawrence Erlbaum Associates, 71:2001. Martin J Pickering and Simon Garrod. 2004. Toward a mechanistic psychology of dialogue. Behavioral and Brain Sciences, 27(02):169–190. Martin J Pickering and Simon Garrod. 2007. Do people use language production to make predictions during comprehension? Trends in cognitive sciences, 11(3):105–110. David Reitter, Frank Keller, and Johanna D Moore. 2011. A computational cognitive model of syntactic priming. Cognitive Science, 35(4):587–637. David Reitter and Johanna D Moore. 2014. Alignment and task success in spoken dialogue. Journal of Memory and Language, 76:29–46. Yafei Wang, David Reitter, and John Yen. 2014. Linguistic adaptation in conversation threads: Analyzing alignment in online health communities. In Proceedings of Cognitive Modeling and Computational Linguistics. Workshop at the Annual Meeting of the Association for Computational Linguistics. Michael Willemyns, Cynthia Gallois, Victor Callan, and J Pittam. 1997. Accent accommodation in the employment interview. Journal of Language and Social Psychology, 15(1):3–22. Yang Xu and David Reitter. 2015. An evaluation and comparison of linguistic alignment measures. In Proceedings of Cognitive Modeling and Computational Linguistics (CMCL), pages 58–67, Denver, DO. Yang Xu and David Reitter. 2016a. Convergence of syntactic complexity in conversation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 443–448, Berlin, Germany. Yang Xu and David Reitter. 2016b. Entropy converges between dialogue participants: Explanations from an information-theoretic perspective. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 537–546, Berlin, Germany. Yang Xu and David Reitter. 2018. Information density converges in dialogue: Towards an informationtheoretic model. Cognition, 170:147–163.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 611–620 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 611 TutorialBank: A Manually-Collected Corpus for Prerequisite Chains, Survey Extraction and Resource Recommendation Alexander R. Fabbri Irene Li Prawat Trairatvorakul Yijiao He Wei Tai Ting Robert Tung Caitlin Westerfield Dragomir R. Radev Department of Computer Science, Yale University {alexander.fabbri,irene.li,prawat.trairatvorakul,yijiao.he, robert.tung,weitai.ting,caitlin.westerfield,dragomir.radev}@yale.edu Abstract The field of Natural Language Processing (NLP) is growing rapidly, with new research published daily along with an abundance of tutorials, codebases and other online resources. In order to learn this dynamic field or stay up-to-date on the latest research, students as well as educators and researchers must constantly sift through multiple sources to find valuable, relevant information. To address this situation, we introduce TutorialBank, a new, publicly available dataset which aims to facilitate NLP education and research. We have manually collected and categorized over 6,300 resources on NLP as well as the related fields of Artificial Intelligence (AI), Machine Learning (ML) and Information Retrieval (IR). Our dataset is notably the largest manually-picked corpus of resources intended for NLP education which does not include only academic papers. Additionally, we have created both a search engine 1 and a command-line tool for the resources and have annotated the corpus to include lists of research topics, relevant resources for each topic, prerequisite relations among topics, relevant subparts of individual resources, among other annotations. We are releasing the dataset and present several avenues for further research. 1 Introduction NLP has seen rapid growth over recent years. A Google search of “Natural Language Processing” returns over 100 million hits with papers, tutorials, 1http://aan.how blog posts, codebases and other related online resources. Additionally, advances in related fields such as Artificial Intelligence and Deep Learning are strongly influencing current NLP research. With these developments, an increasing number of tutorials and online references are being published daily. As a result, the task of students, educators and researchers of tracking the changing landscape in this field has become increasingly difficult. Recent work has studied the educational aspect of mining text for presenting scientific topics. One goal has been to develop concept maps of topics, graphs showing which topics are prerequisites for learning a given topic (Gordon et al., 2016; Liu et al., 2016; Pan et al., 2017a,b; Liang et al., 2017). Another goal has been to automatically create reading lists for a subject either by building upon concept graphs (Gordon et al., 2017) or through an unstructured approach (Jardine, 2014). Additionally, other work has aimed to automatically summarize scientific topics, either by extractively summarizing academic papers (Jha et al., 2013, 2015; Jaidka et al., 2016) or by producing Wikipedia articles on these topics from multiple sources (Sauper and Barzilay, 2009; Liu et al., 2018). Scientific articles constitute primary texts which describe an author’s work on a particular subject, while Wikipedia articles can be viewed as tertiary sources which summarize both results from primary works as well as explanations from secondary sources. 
Tang and McCalla (2004, 2009) and Sheng et al. (2017) explore the pedagogical function among the types of sources. To address the problem of the scientific education of NLP more directly, we focus on the annotation and utilization of secondary sources presented in a manner immediately useful to the NLP community. We introduce the TutorialBank corpus, a manually-collected dataset of links to over 612 6,300 high-quality resources on NLP and related fields. The corpus’s magnitude, manual collection and focus on annotation for education in addition to research differentiates it from other corpora. Throughout this paper we use the general term “resource” to describe any tutorial, research survey, blog post, codebase or other online source with a focus on educating on a particular subject. We have created a search engine for these resources and have annotated them according to a taxonomy to facilitate their sharing. Additionally, we have annotated for pedagogical role, prerequisite relations and relevance of resources to hand-selected topics and provide a command-line interface for our annotations. Our main contribution is the manual collection of good quality resources related to NLP and the annotation and presentation of these resources in a manner conducive to NLP education. Additionally, we show initial work on topic modeling and resource recommendation. We present a variant of standard reading-list generation which recommends resources based on a title and abstract pair and demonstrate additional uses and research directions for the corpus. 2 Related Work 2.1 Pedagogical Value of Resources Online resources are found in formats which vary in their roles in education. Sheng et al. (2017) identify seven types of pedagogical roles found in technical works: Tutorial, Survey, Software Manual, Resource, Reference Work, Empirical Results, and Other. They annotate a dataset of over 1,000 resources according to these types. Beyond these types, resources differ in their pedagogical value, which they define as “the estimate of how useful a document is to an individual who seeks to learn about specific concepts described in the document”. Tang and McCalla (2004, 2009) discuss the pedagogical value of a single type, academic papers, in relation to a larger recommendation system. 2.2 Prerequisite Chains Prerequisite chains refer to edges in a graph describing which topics are dependent on the knowledge of another topic. Prerequisite chains play an important role in curriculum planning and reading list generation. Liu et al. (2016) propose “Concept Graph Learning” in order to induce a graph from which they can predict prerequisite relations among university courses. Their framework consists of two graphs: (1) a higher-level graph which consists of university courses and (2) a lowerlevel graph which consists of induced concepts and pair-wise sequential preferences in learning or teaching the concept. Liang et al. (2017) experiment with prerequisite chains on education data but focus on the recovery of a concept graph rather than on predicting unseen course relations as in Liu et al. (2016). They introduce both a synthetic dataset as well as one scraped from 11 universities which includes course prerequisites as well as conceptprerequisite labels. Concept graphs are also used in (Gordon et al., 2016) to address the problem of developing reading lists for students. 
The concept graph in this case is a labeled graph where nodes represent both documents and concepts (determined using Latent Dirichlet Allocation (LDA) (Blei et al., 2003)), and edges represent dependencies. They propose methods based on cross entropy and information flow for determining edges in the graph. Finally, finding prerequisite relationships has also been used in other contexts such as Massive Open Online Courses (MOOCs) (Pan et al., 2017a,b). 2.3 Reading List Generation Jardine (2014) generates recommended reading lists from a corpus of technical papers in an unstructured manner in which a topic model weighs the relevant topics and relevant papers are chosen through his ThemedPageRank approach. He also provides a set of expert-generated reading lists. Conversely, Gordon et al. (2017) approach reading list generation from a structured perspective, first generating a concept graph from the corpus and then traversing the graph to select the most relevant document. 2.4 Survey Extraction Recent work on survey generation for scientific topics has focused on creating summaries from academic papers (Jha et al., 2013, 2015; Jaidka et al., 2016). Jha et al. (2013) present a system that generates summaries given a topic keyword. From a base corpus of papers found by query matching, they expand the corpus via a citation network using a heuristic called Restricted Expansion. This process is repeated for seven standard NLP topics. In a similar manner, Jha et al. (2015) experiment with fifteen topics in computational linguistics and 613 collect at least surveys written by experts on each topic, also making use of citation networks to expand their corpus. They introduce a content model as well as a discourse model and perform a qualitative comparisons of coherence with a standard summarization model. The task of creating surveys for specified topics has also been viewed in the multi-document summarization setting of generating Wikipedia articles (Sauper and Barzilay, 2009; Liu et al., 2018). Sauper and Barzilay (2009) induce domain-specific templates from Wikipedia and fill these templates with content from the Internet. More recently Liu et al. (2018) explore a diverse set of domains for summarization and are the first to attempt abstractive summarization of the first section of Wikipedia articles, by combining extractive and abstractive summarization methods. 3 Dataset Collection 3.1 An Overview of TutorialBank As opposed to other collections like the ACL Anthology (Bird et al., 2008; Radev et al., 2009, 2013, 2016), which contain solely academic papers, our corpus focuses mainly on resources other than academic papers. The main goal in our decision process of what to include in our corpus has been the quality-control of resources which can be used for an educational purpose. Initially, the resources collected were conference tutorials as well as surveys, books and longer papers on broader topics, as these genres contain an inherent amount of quality-control. Later on, other online resources were added to the corpus, as explained below. Student annotators, described later on, as well as the professor examined resources which they encountered in their studies. The resources were added to the corpus if deemed of good quality. Important to note is that not all resources which were found on the Internet were added to TutorialBank; one could scrape the web according to search terms, but quality control of the results would be largely missing. 
The quality of a resource is a somewhat subjective measure, but we aimed to find resources which would serve a pedagogical function to either students or researchers, with a professor of NLP making the final decision. This collection of resources and meta-data annotation has been done over multiple years, while this year we created the search engine and added additional annotations mentioned below. 1 - Introduction and Linguistics 2 - Language Modeling, Syntax and Parsing 3 - Semantics and Logic 4 - Pragmatics, Discourse, Dialogue and Applications 5 - Classification and Clustering 6 - Information Retrieval and Topic Modeling 7 - Neural Networks and Deep Learning 8 - Artificial Intelligence 9 - Other Topics Table 1: Top-level Taxonomy Topics Topic Category Count Introduction to Neural Networks and Deep Learning 635 Tools for Deep Learning 475 Miscellaneous Deep Learning 287 Machine Learning 225 Word Embeddings 139 Recurrent Neural Networks 134 Python Basics 133 Reinforcement learning 132 Convolutional Neural Networks 129 Introduction to AI 89 Table 2: Corpus count by taxonomy topic for the most frequent topics (excluding topic “Other”). 3.1.1 TutorialBank Taxonomy In order to facilitate the sharing of resources about NLP, we developed a taxonomy of 305 topics of varying granularity. The top levels of our taxonomy tree are shown in Table 1. The backbone of our Taxonomy corresponds to the syllabus of a university-level NLP course and was expanded to include related topics from other courses in ML, IR and AI. As a result, there is a bias in the corpus towards NLP resources and resources from other fields in so far as they are relevant to NLP. However, this bias is planned, as our focus remains teaching NLP. The resource count for the most frequent taxonomy topics is shown in Table 2. 3.2 Data Preprocessing For each resource in the corpus, we downloaded the corresponding PDF, PowerPoint presentations and other source formats and used PDFBox to perform OCR in translating the files to textual format. For HTML pages we downloaded both the raw HTML with all images as well as a formatted text version of the pages. For copyright purposes we release only the meta data such as urls and annotations and provide scripts for reproducing the dataset. 614 Resource Category Count corpus 131 lecture 126 library 1014 link set 1186 naclo 154 paper 1176 survey 390 tutorial 2079 Table 3: Corpus count by pedagogical feature. 4 Dataset Annotation Annotations were performed by a group of 3 PhD students in NLP, and 6 undergraduate Computer Science students who have taken at least one course in AI or NLP. 4.1 Pedagogical Function When collecting resources from the Internet, each item was labeled according to the medium in which it was found, analogous to the pedagogical function of (Sheng et al., 2017). We will use this term throughout the paper to describe this categorization. The categories along with their counts are shown in Table 3: • Corpus: A corpus provides access to and a description of a scientific dataset. • Lecture: A lecture consists of slides/notes from a university lecture. • Library: A library consists of github pages and other codebases which aid in the implementation of algorithms. • NACLO: NACLO problems refer to linguistics puzzles from the North American Computational Linguistics Olympiad. • Paper: A paper is a short/long conference paper taken from sites such as https://arxiv.org/ and which is not included in the ACL Anthology. 
• Link set: A link set provides a collection of helpful links in one location. • Survey: A survey is a long paper or book which describes a broader subject. • Tutorial: A tutorial is a slide deck from a conference tutorial or an HTML page that describes a contained topic. 4.2 Topic to Resource Collection We first identified by hand 200 potential topics for survey generation in the fields of NLP, ML, AI and Capsule Networks Domain Adaptation Document Representation Matrix factorization Natural language generation Q Learning Recursive Neural Networks Shift-Reduce Parsing Speech Recognition Word2Vec Table 4: Random sample of the list of 200 topics used for prerequisite chains, readling lists and survey extraction. IR. Topics were added according to the following criteria: 1. It is conceivable that someone would write a Wikipedia page on this topic (an actual page may or may not exist). 2. The topic is not overly general (e.g., “Natural Language Processing”) or too obscure or narrow. 3. In order to write a survey on the topic, one would need to include information from a number of sources. While some of the topics come from our taxonomy, many of the taxonomy topics have a different granularity than we desired, which motivated our topic collection. Topics were added to the list along with their corresponding Wikipedia pages, if they exist. A sample of the topics selected is shown in 4. Once the list of topics was compiled, annotators were assigned topics and asked to search that topic in the TutorialBank search engine and find relevant resources. In order to impose some uniformity on the dataset, we chose to only include resources which consisted of PowerPoint slides as well as HTML pages labeled as tutorials. We divided the topics among the annotators and asked them to choose five resources per topic using our search engine. The resource need not solely focus on the given topic; the resource may be on a more general topic and include a section on the given topic. As in general searching for resources, often resources include related information, so we believe this setting is fitting. For some topics the annotators chose fewer than five resources (partially due to the constraint we impose on the form of the resources). We noted topics for which no resources were found, and rather 615 than replace the topics to reflect TutorialBank coverage, we leave these topics in and plan to add additional resources in a future release. 4.3 Prerequisite Chains Even with a collection of resources and a list of topics, a student may not know where to begin studying a topic of interest. For example, in order to understand sentiment analysis the student should be familiar with Bayes’ Theorem, the basics of ML as well as other topics. For this purpose, the annotators annotated which topics are prerequisites of others for the given topics from their reading lists. We expanded our list of potential prerequisites to include eight additional topics which were too broad for survey generation (e.g., Linear Algebra) but which are important prerequisites to capture. Following the method of (Gordon et al., 2016), we define labeling a topic Y as a prerequisite of X according to the following question: • Would understanding Topic Y help you to understand Topic X? 
As in (Gordon et al., 2016), the annotators can answer this question as “no”, “somewhat” or “yes.” 4.4 Reading Lists When annotators were collecting relevant resources for a particular topic, we asked them to order the resources they found in terms of the usefulness of the resource for learning that particular topic. We also include the Wikipedia pages corresponding to the topics, when available, as an additional source of information. We do not perform additional annotation of the order of the resources or experiment in automatically reproducing these ordered lists but rather offer this annotation as a pedagogical tool for students and educators. We plan the expansion of these lists and analysis in future experiments. 4.5 Survey Extraction We frame the task of creating surveys of scientific topics as a document retrieval task. A student searching for resources in order to learn about a topic such as Recurrent Neural Networks (RNN’s) may encounter resources 1) which solely cover RNN’s as well as 2) resources which cover RNN’s within the context of a larger topic (e.g., Deep Learning). Within the first type, not every piece of content (a single PowerPoint slide or section in a blog post) contributes equally well to an understanding of RNN’s; the content may focus on background information or may not clearly explain the topic. Within the second type, larger tutorials may contain valuable information on the topic, but may also contain much information not immediately relevant to the query. Given a query topic and a set of parsed documents we want to retrieve the parts most relevant to the topic. In order to prepare the dataset for extracting surveys of topics, we first divide resources into units of content which we call “cards”. PowerPoint slides inherently contain a division in the form of each individual slide, so we divide PowerPoint presentations into individual slides/cards. For HTML pages, the division is less clear. However, we convert the HTML pages to a markdown file and then automatically split the markdown file using header markers. We believe this is a reasonable heuristic as tutorials and similar content tend to be broken up into sections signalled by headers. For each of the resources which the annotators gathered for the reading lists on a given topic, that same annotator was presented with each card from that resource and asked to rate the usefulness of the card. The annotator could rate the card from 0-2, with 0 meaning the card is not useful for learning the specified topic, 1 meaning the card is somewhat useful and 2 meaning the card is useful. We chose a 3-point scale as initial trials showed a 5-point scale to be too subjective. The annotators also had the option in our annotation interface to drop cards which were parsed incorrectly or were repeated one after the other as well as skip cards and return to score a card. 4.6 Illustrations Whether needed for understanding a subject more deeply or for preparing a blog post on a subject, images play an important role in presenting concepts more concretely. Simply extracting the text from HTML pages leaves behind this valuable information, and OCR software often fails to parse complex graphs and images in a non-destructive fashion. To alleviate this problem and promote the sharing of images, we extracted all images from our collected HTML pages. Since many images were simply HTML icons and other extraneous images, we manually checked the images and selected those which are of value to the NLP student. 
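The image-extraction step just described can be approximated with a short script. The sketch below is illustrative only and is not the authors' pipeline: it assumes the raw HTML pages have already been saved to disk, and the extension and minimum-size filter used to discard icons is a guessed heuristic rather than the authors' rule.

```python
# Sketch: pull candidate illustration images out of saved HTML pages.
# Assumes: pages/ holds downloaded HTML files and beautifulsoup4 is installed.
# The icon filter (extension + minimum declared size) is a heuristic, not the authors' rule.
from urllib.parse import urljoin
from bs4 import BeautifulSoup

MIN_SIDE = 100                      # skip tiny images that are likely icons
OK_EXT = (".png", ".jpg", ".jpeg", ".gif", ".svg")

def candidate_images(html_path, base_url):
    with open(html_path, encoding="utf-8", errors="ignore") as f:
        soup = BeautifulSoup(f.read(), "html.parser")
    for img in soup.find_all("img"):
        src = img.get("src")
        if not src or not src.lower().endswith(OK_EXT):
            continue
        # Keep the image only if its declared width/height (when present) is large enough.
        try:
            w = int(img.get("width", MIN_SIDE))
            h = int(img.get("height", MIN_SIDE))
        except (TypeError, ValueError):
            w = h = MIN_SIDE
        if w >= MIN_SIDE and h >= MIN_SIDE:
            yield urljoin(base_url, src)

if __name__ == "__main__":
    # The file name and base URL are placeholders; they would come from the resource metadata.
    for url in candidate_images("pages/example_tutorial.html", "http://example.com/tutorial/"):
        print(url)
```

After such automatic filtering, the manual selection described above would still be needed to remove decorative or off-topic images.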
We collected a total of 2,000 images and matched them with the taxonomy topic name of the resource it came from as well as the url of the resource. While we cannot outdo the countless im616 ages from Google search, we believe illustrations can be an additional feature of our search engine, and we describe an interface for this collection below. 5 Additional Features and Analysis 5.1 Search Engine In order to present our corpus in a user-friendly manner, we created a search engine using Apache Lucene2. We allow the user to query key words to search our resource corpus, and the results can then be sorted based on relevance, year, topic, medium, and other meta data. In addition to searching by term, users can browse the resources by topic according to our taxonomy. For each child topic from the top-level taxonomy downward, we display resources according to their pedagogical functions. In addition to searching for general resources, we also provide search functionality for a corpus of papers, where the user can search by keyword as well as by author and venue. While the search engine described above provides access to our base corpus and meta data, we also provide a command-line interface tool with our release so that students and researchers can easily use our annotations for prerequisite topics, illustrations and survey generation for educational purposes. The tool allows the user to input a topic from the taxonomy and retrieve all images related to that topic according to our meta data. Additionally, the user can input a topic from our list of 200 topics, and our tool outputs the prerequisites of that topic according to our annotation as well as the cards labelled as relevant for that topic. 5.2 Resource Recommendation from Title and Abstract Pairs In addition to needing to search for a general term, often a researcher begins with an idea for a project which is already focused on a nuanced sub-task. An employee at an engineering company may be starting a project on image captioning. Ideas about the potential direction of this project may be clear, but what resources may be helpful or what papers have already been published on the subject may not be immediately obvious. To this end we propose the task of recommending resources from title and abstract pairs. The employee will input the title and abstract of the project and obtain a list of resources which can help complete the project. 2http://lucene.apache.org/ This task is analogous to reproducing the reference section of a paper, however, with a focus on tutorials and other resources rather than solely on papers. As an addition to our search engine, we allow a user to input a title and an abstract of variable length. We then propose taxonomy topics based on string matches with the query as well as a list of resources and papers and their scores as determined by the search engine. We later explore two baseline models for recommending resources based on document and topic modeling. 5.3 Dataset and Annotation Statistics We created reading lists for 182 of the 200 topics we identify in Section 4.2. Resources were not found for 18 topics due to the granularity of the topic (e.g., Radial Basis Function Networks) as well as our intended restriction of the chosen resources to PowerPoint presentations and HTML pages. The average number of resources per reading list for the 182 topics is 3.94. As an extension to the reading lists we collected Wikipedia pages for 184 of the topics and present these urls as part of the dataset. 
We annotated prerequisite relations for the 200 topics described above. We present a subset of our annotations in Figure 1, which shows the network of topic relations (nodes without incoming edges were not annotated for their prerequisites as part of this shown inter-annotation round). Our network consists of 794 unidirectional edges and 33 bidirectional edges. The presence of bidirectional edges stems from our definition of a prerequisite, which does not preclude bidirectionality (one topic can help explain another and viceversa) as well as the similarity of the topics. The set of bidirectional edges consists of topic pairs (BLEU - ROUGE; Word Embedding - Distributional Semantics; Backpropagation - Gradient descent) which could be collapsed into one topic to create a directed acyclic graph in the future. For survey extraction, we automatically split 313 resources into content cards which we annotated for usefulness in survey extraction. These resources are a subset of the reading lists limited in number due to constraints in downloading urls and parsing to our annotation interface. The total number of cards which were not marked as repeats/mis-parsed totals 17,088, with 54.59 per resource. 6,099 cards were labeled as somewhat relevant or relevant for the target topic. The resources marked as non-relevant may be poorly 617 Figure 1: Subset of prerequisite annotations taken from inter-annotator agreement round. Annotation Kappa Pedagogical Function 0.69 Prerequisites 0.30 Survey Extraction 0.33 Table 5: Inter-annotator agreement. presented or may not pertain fully to the topic of that survey. These numbers confirm the appropriateness of this survey corpus as a non-trivial information retrieval task. To better understand the difficulty of our annotation tasks, we performed inter-annotator agreement experiments for each of our annotations. We randomly sampled twenty-five resources and had annotators label for pedagogical function. Additionally, we sampled twenty-five topics for prerequisite annotations and five topics with reading list lengths of five for survey annotation. We used Fleiss’s Kappa (Fleiss et al., 2004), a variant of Cohen’s Kappa (Cohen, 1960) designed to measure annotator agreement for more than two annotators. The results are shown in Table 5. Using the scale as defined in Landis and Koch (1977), pedagogical function annotation exhibits substantial agreement while prerequisite annotation and survey extraction annotation show fair agreement. The Kappa score for pedagogical function is comparable to that of Sheng et al. (2017) (0.68) while the prerequisite annotation is slightly lower than the agreement metric used in Gordon et al. (2016) (0.36) although they measure agreement through Pearson correlation. We believe that the sparsity of the labels plays a role in these scores. 5.4 Comparison to Similar Datasets Our corpus distinguishes itself in its magnitude, manual collection and focus on annotation for educational purposes in addition to research tasks. We use similar categories for classifying pedagogical function as Sheng et al. (2017), but our corpus is hand-picked and over four-times larger, while exhibiting similar annotation agreement. Gordon et al. (2016) present a corpus for prerequisite relations among topics, but this corpus differs in coverage. They used LDA topic modeling to generate a list of 300 topics, while we manually create a list of 200 topics based on criteria described above. 
Although their topics are generated from the ACL Anthology and related to NLP, we find less than a 40% overlap in topics. Additionally, they only annotate a subset of the topics for prerequisite annotations while we focus on broad coverage, annotating two orders of magnitude larger in terms of prerequisite edges while exhibiting fair inter-annotator agreement. Previous work and datasets on generating surveys for scientific topics have focused on scientific articles (Jha et al., 2013, 2015; Jaidka et al., 2016) and Wikipedia pages (Sauper and Barzilay, 2009; Liu et al., 2018) as a summarization task. We, on the other hand, view this problem as an information retrieval task and focus on extracting content from manually-collected PowerPoint slides and online tutorials. Sauper and Barzilay (2009) differ in their domain coverage, and while the surveys of Jha et al. (2013, 2015) focus on NLP, we collect resources for an order of magnitude larger set of topics. Finally, our focus here in creating surveys, as well as the other annotations, is first and foremost to create a useful tool for students and researchers. Websites such as the ACL Anthology3 and arXiv4 provide an abundance of resources, but do not focus on the pedagogical aspect of their content. Meanwhile, websites such as Wikipedia which aim to create a survey of a topic may not reflect the latest trends in rapidly changing fields. 6 Topic Modeling and Resource Recommendation As an example usage of our corpus, we experimented with topic modeling and its extension to 3http://aclweb.org/anthology/ 4https://arxiv.org/ 618 Figure 2: Plot showing a query document with title “Statistical language models for IR” and its neighbour document clusters as obtained through tSNE dimension reduction for Doc2Vec (left) and LDA topic modeling (right). Nearest neighbor documents titles are shown to the right of each plot. resource recommendation. We restricted our corpus for this study to non-HTML files to examine the single domain of PDF’s and PowerPoint presentations. This set consists of about 1,480 files with a vocabulary size 191,446 and a token count of 9,134,452. For each file, the tokens were processed, stop tokens were stripped, and then each token was stemmed. Words with counts less than five across the entire corpus were dropped. We experimented with two models: LDA, a generative probabilistic model mentioned earlier, and Doc2Vec (Le and Mikolov, 2014), an extension of Word2Vec (Mikolov et al., 2013) which creates representations of arbitrarily-sized documents. Figure 2 shows the document representations obtained with Doc2Vec as well as the topic clusters created with LDA. The grouping of related resources around a point demonstrates the clustering abilities of these models. We applied LDA in an unsupervised way, using 60 topics over 300 iterations as obtained through experimentation, and then colored each document dot with its category to observe the distribution. Our Doc2Vec model used hidden dimension 300, a window size of 10 and a constant learning rate of 0.025. Then, the model was trained for 10 epochs. We tested these models for the task of resource recommendation from title+abstract pairs. We collected 10 random papers from ACL 2017. For LDA, the document was classified to a topic, and then the top resources from that topic were chosen, while Doc2Vec computed the similarity between the query document and the training set and chose the most similar documents. 
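The topic-modeling setup just described can be reproduced in outline with gensim. The following is a minimal sketch under the hyperparameters reported above (60 LDA topics over 300 iterations; Doc2Vec with 300 dimensions, window 10, a constant learning rate of 0.025, 10 epochs, and a minimum token count of 5); the corpus loader load_resources() is a placeholder, and the exact preprocessing details are assumptions rather than the authors' code.

```python
# Sketch of the LDA / Doc2Vec recommendation baselines (gensim 4.x assumed).
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.parsing.preprocessing import STOPWORDS
from gensim.utils import simple_preprocess
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def preprocess(text):
    # tokenize, strip stop words, stem
    return [stemmer.stem(t) for t in simple_preprocess(text) if t not in STOPWORDS]

# docs: list of (resource_id, raw_text); load_resources() is a hypothetical loader.
docs = load_resources()
tokens = [preprocess(text) for _, text in docs]

# --- LDA baseline ---
dictionary = Dictionary(tokens)
dictionary.filter_extremes(no_below=5, no_above=1.0)   # drop words seen fewer than 5 times
bow = [dictionary.doc2bow(t) for t in tokens]
lda = LdaModel(corpus=bow, id2word=dictionary, num_topics=60, iterations=300)

def recommend_lda(title, abstract, topn=20):
    q = dictionary.doc2bow(preprocess(title + " " + abstract))
    best_topic = max(lda.get_document_topics(q), key=lambda tp: tp[1])[0]
    # rank training resources by their weight on the query's dominant topic
    scored = []
    for (rid, _), b in zip(docs, bow):
        weight = dict(lda.get_document_topics(b, minimum_probability=0.0)).get(best_topic, 0.0)
        scored.append((weight, rid))
    return [rid for _, rid in sorted(scored, reverse=True)[:topn]]

# --- Doc2Vec baseline ---
tagged = [TaggedDocument(t, [rid]) for (rid, _), t in zip(docs, tokens)]
d2v = Doc2Vec(vector_size=300, window=10, min_count=5,
              alpha=0.025, min_alpha=0.025, epochs=10)
d2v.build_vocab(tagged)
d2v.train(tagged, total_examples=d2v.corpus_count, epochs=d2v.epochs)

def recommend_doc2vec(title, abstract, topn=20):
    vec = d2v.infer_vector(preprocess(title + " " + abstract))
    return [rid for rid, _ in d2v.dv.most_similar([vec], topn=topn)]
```

Note that the LDA baseline reduces the query to its single dominant topic, which matches the description above and helps explain why this kind of model does best on queries with a sharply defined topic area.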
We concatenated the title and abstract as input and had our models predict the top 20 documents. We then had five annotators rate the recommendations for helpfulness as 0 (not helpful) or 1 (helpful). Recommended resources were rated according to the criterion of whether reading the resource would be useful for doing a project as described in the title and abstract. The results are found in Figure 3.

Figure 3: Relevance accuracies of the Doc2Vec and LDA resource recommendation models.

Averaging the performance over each test case, the LDA model performed better than Doc2Vec (0.45 to 0.34), although both leave large room for improvement. LDA recommended resources notably better for cases 5 and 6, which correspond to papers with very well defined topic areas (Question Answering and Machine Translation), while Doc2Vec was able to find similar documents for cases 2 and 8, which are a mixture of topics yet are well represented in our corpus (Reinforcement Learning with dialog agents and emotion (sentiment) detection with classification). The low performance of both models also corresponds to differences in corpus coverage, and we plan to explore this bias in the future. We believe that this variant of reading list generation, as well as the relationship between titles and abstracts, is an unexplored and exciting area for future research.

7 Conclusion and Future Work In this paper we introduce the TutorialBank Corpus, a collection of over 6,300 hand-collected resources on NLP and related fields. Our corpus is notably larger than similar datasets which deal with pedagogical resources and topic dependencies, and it is unique in its use as an educational tool. To this point, we believe that this dataset, with its multiple layers of annotation and usable interface, will be an invaluable tool to the students, educators and researchers of NLP. Additionally, the corpus promotes research on tasks not limited to pedagogical function classification, topic modeling and prerequisite relation labelling. Finally, we formulate the problem of recommending resources for a given title and abstract pair as a new way to approach reading list generation and propose two baseline models. For future work we plan to continue the collection and annotation of resources and to separately explore each of the above research tasks. Acknowledgments We would like to thank all those who worked on the development of the search engine and website as well as those whose discussion and annotations greatly helped this work, especially Jungo Kasai, Alexander Strzalkowski, Michihiro Yasunaga and Rui Zhang.

References Steven Bird, Robert Dale, Bonnie J. Dorr, Bryan R. Gibson, Mark Thomas Joseph, Min-Yen Kan, Dongwon Lee, Brett Powley, Dragomir R. Radev, and Yee Fan Tan. 2008. The ACL Anthology Reference Corpus: A Reference Dataset for Bibliographic Research in Computational Linguistics. In LREC. European Language Resources Association. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022. Jacob Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1):37–46. Joseph L. Fleiss, Bruce Levin, and Myunghee Cho Paik. 2004. The Measurement of Interrater Agreement. John Wiley & Sons, Inc. Jonathan Gordon, Stephen Aguilar, Emily Sheng, and Gully Burns. 2017. Structured Generation of Technical Reading Lists.
In BEA@EMNLP, pages 261– 270. Association for Computational Linguistics. Jonathan Gordon, Linhong Zhu, Aram Galstyan, Prem Natarajan, and Gully Burns. 2016. Modeling Concept Dependencies in a Scientific Corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Kokil Jaidka, Muthu Kumar Chandrasekaran, Sajal Rustagi, and Min-Yen Kan. 2016. Overview of the Cl-SciSumm 2016 Shared Task. In BIRNDL@JCDL, volume 1610 of CEUR Workshop Proceedings, pages 93–102. CEUR-WS.org. James Gregory Jardine. 2014. Automatically Generating Reading Lists. Ph.D. thesis, University of Cambridge, UK. Rahul Jha, Amjad Abu-Jbara, and Dragomir R. Radev. 2013. A System for Summarizing Scientific Topics Starting from Keywords. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, 4-9 August 2013, Sofia, Bulgaria, Volume 2: Short Papers, pages 572– 577. Rahul Jha, Reed Coke, and Dragomir R. Radev. 2015. Surveyor: A System for Generating Coherent Survey Articles for Scientific Topics. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA., pages 2167–2173. J Richard Landis and Gary G Koch. 1977. The Measurement of Observer Agreement for Categorical Data. Biometrics, pages 159–174. Quoc V. Le and Tomas Mikolov. 2014. Distributed Representations of Sentences and Documents. CoRR, abs/1405.4053. Chen Liang, Jianbo Ye, Zhaohui Wu, Bart Pursel, and C. Lee Giles. 2017. Recovering Concept Prerequisite Relations from University Course Dependencies. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 4786– 4791. Hanxiao Liu, Wanli Ma, Yiming Yang, and Jaime G. Carbonell. 2016. Learning Concept Graphs from Online Educational Data. J. Artif. Intell. Res., 55:1059–1090. Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating Wikipedia by Summarizing Long Sequences. International Conference on Learning Representations. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. CoRR, abs/1301.3781. 620 Liangming Pan, Chengjiang Li, Juanzi Li, and Jie Tang. 2017a. Prerequisite Relation Learning for Concepts in MOOCs. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 August 4, Volume 1: Long Papers, pages 1447–1456. Liangming Pan, Xiaochen Wang, Chengjiang Li, Juanzi Li, and Jie Tang. 2017b. Course Concept Extraction in MOOCs via Embedding-Based Graph Propagation. In IJCNLP(1), pages 875–884. Asian Federation of Natural Language Processing. Dragomir R. Radev, Mark Thomas Joseph, Bryan R. Gibson, and Pradeep Muthukrishnan. 2016. A Bibliometric and Network Analysis of the Field of Computational Linguistics. JASIST, 67(3):683–706. Dragomir R. Radev, Pradeep Muthukrishnan, and Vahed Qazvinian. 2009. The ACL Anthology Network Corpus. In Proceedings, ACL Workshop on Natural Language Processing and Information Retrieval for Digital Libraries, Singapore. Dragomir R. Radev, Pradeep Muthukrishnan, Vahed Qazvinian, and Amjad Abu-Jbara. 2013. The ACL Anthology Network Corpus. Language Resources and Evaluation, 47(4):919–944. Christina Sauper and Regina Barzilay. 2009. Automatically Generating Wikipedia Articles: A StructureAware Approach. 
In ACL 2009, Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2-7 August 2009, Singapore, pages 208–216. Emily Sheng, Prem Natarajan, Jonathan Gordon, and Gully Burns. 2017. An Investigation into the Pedagogical Features of Documents. In BEA@EMNLP, pages 109–120. Association for Computational Linguistics. Tiffany Ya Tang and Gordon I. McCalla. 2004. On the Pedagogically Guided Paper Recommendation for an Evolving Web-Based Learning System. In FLAIRS Conference, pages 86–92. AAAI Press. Tiffany Ya Tang and Gordon I. McCalla. 2009. The Pedagogical Value of Papers: a Collaborative-Filtering based Paper Recommender. J. Digit. Inf., 10(2).
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 621–631 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 621 Give Me More Feedback: Annotating Argument Persuasiveness and Related Attributes in Student Essays Winston Carlile Nishant Gurrapadi Zixuan Ke Vincent Ng Human Language Technology Research Institute University of Texas at Dallas Richardson, TX 75083-0688 {winston,zixuan,vince}@hlt.utdallas.edu,[email protected] Abstract While argument persuasiveness is one of the most important dimensions of argumentative essay quality, it is relatively little studied in automated essay scoring research. Progress on scoring argument persuasiveness is hindered in part by the scarcity of annotated corpora. We present the first corpus of essays that are simultaneously annotated with argument components, argument persuasiveness scores, and attributes of argument components that impact an argument’s persuasiveness. This corpus could trigger the development of novel computational models concerning argument persuasiveness that provide useful feedback to students on why their arguments are (un)persuasive in addition to how persuasive they are. 1 Introduction The vast majority of existing work on automated essay scoring has focused on holistic scoring, which summarizes the quality of an essay with a single score and thus provides very limited feedback to the writer (see Shermis and Burstein (2013) for the state of the art). While recent attempts address this problem by scoring a particular dimension of essay quality such as coherence (Miltsakaki and Kukich, 2004), technical errors, relevance to prompt (Higgins et al., 2004; Persing and Ng, 2014), organization (Persing et al., 2010), and thesis clarity (Persing and Ng, 2013), argument persuasiveness is largely ignored in existing automated essay scoring research despite being one of the most important dimensions of essay quality. Nevertheless, scoring the persuasiveness of arguments in student essays is by no means easy. The difficulty stems in part from the scarcity of persuasiveness-annotated corpora of student essays. While persuasiveness-annotated corpora exist for other domains such as online debates (e.g., Habernal and Gurevych (2016a; 2016b)), to our knowledge only one corpus of persuasivenessannotated student essays has been made publicly available so far (Persing and Ng, 2015). Though a valuable resource, Persing and Ng’s (2015) (P&N) corpus has several weaknesses that limit its impact on automated essay scoring research. First, P&N assign only one persuasiveness score to each essay that indicates the persuasiveness of the argument an essay makes for its thesis. However, multiple arguments are typically made in a persuasive essay. Specifically, the arguments of an essay are typically structured as an argument tree, where the major claim, which is situated at the root of the tree, is supported by one or more claims (the children of the root node), each of which is in turn supported by one or more premises. Hence, each node and its children constitute an argument. In P&N’s dataset, only the persuasiveness of the overall argument (i.e., the argument represented at the root and its children) of each essay is scored. Hence, any system trained on their dataset cannot provide any feedback to students on the persuasiveness of any arguments other than the overall argument. 
Second, P&N’s corpus does not contain annotations that explain why the overall argument is not persuasive if its score is low. This is undesirable from a feedback perspective, as a student will not understand why her argument is not persuasive if its score is low. Our goal in this paper is to annotate and make publicly available a corpus of persuasive student essays that addresses the aforementioned weaknesses via designing appropriate annotation schemes and scoring rubrics. Specifically, not only do we score the persuasiveness of each ar622 gument in each essay (rather than simply the persuasiveness of the overall argument), but we also identify a set of attributes that can explain an argument’s persuasiveness and annotate each argument with the values of these attributes. These annotations enable the development of systems that can provide useful feedback to students, as the attribute values predicted by these systems can help a student understand why her essay receives a particular persuasiveness score. To our knowledge, this is the first corpus of essays that are simultaneously annotated with argument components, persuasiveness scores, and related attributes.1 2 Related Work While argument mining research has traditionally focused on determining the argumentative structure of a text document (i.e., identifying its major claim, claims, and premises, as well as the relationships between these argument components) (Stab and Gurevych, 2014b, 2017a; Eger et al., 2017), researchers have recently begun to study new argument mining tasks, as described below. Persuasiveness-related tasks. Most related to our study is work involving argument persuasiveness. For instance, Habernal and Gurevych (2016b) and Wei et al. (2016) study the persuasiveness ranking task, where the goal is to rank two internet debate arguments written for the same topic w.r.t. their persuasiveness. As noted by Habernal and Gurevych, ranking arguments is a relatively easier task than scoring an argument’s persuasiveness: in ranking, a system simply determines whether one argument is more persuasive than the other, but not how much more persuasive one argument is than the other; in scoring, however, a system has to determine how persuasive an argument is on an absolute scale. Note that ranking is not an acceptable evaluation setting for studying argument persuasiveness in the essay domain, as feedback for an essay has to be provided independently of other essays. In contrast, there are studies that focus on factors affecting argument persuasiveness in internet debates. For instance, Lukin et al. (2017) examine how audience variables (e.g., personalities) interact with argument style (e.g., factual vs. emotional arguments) to affect argument persuasive1Our annotated corpus and annotation manual are publicly available at the website http://www.hlt.utdallas.edu/∼zixuan/EssayScoring. ness. Persing and Ng (2017) identify factors that negatively impact persuasiveness, so their factors, unlike ours, cannot explain what makes an argument persuasive. Other argument mining tasks. Some of the attributes that we annotate our corpus with have been studied. For instance, Hidey et al. (2017) examine the different semantic types of claims and premises, whereas Higgins and Walker (2012) investigate persuasion strategies (i.e., ethos, pathos, logos). Unlike ours, these studies use data from online debate forums and social/environment reports. Perhaps more importantly, they study these attributes independently of persuasiveness. 
Several argument mining tasks have recently been proposed. For instance, Stab and Gurevych (2017b) examine the task of whether an argument is sufficiently supported. Al Khatib et al. (2016) identify and annotate a news editorial corpus with fine-grained argumentative discourse units for the purpose of analyzing the argumentation strategies used to persuade readers. Wachsmuth et al. (2017) focus on identifying and annotating 15 logical, rhetorical, and dialectical dimensions that would be useful for automatically assessing the quality of an argument. Most recently, the Argument Reasoning Comprehension task organized as part of SemEval 2018 (https://competitions.codalab.org/competitions/17327) has focused on selecting the correct warrant that explains the reasoning of an argument that consists of a claim and a reason.

3 Corpus The corpus we chose to annotate is composed of 102 essays randomly chosen from the Argument Annotated Essays corpus (Stab and Gurevych, 2014a). This collection of essays was taken from essayforum (www.essayforum.com), a site offering feedback to students wishing to improve their ability to write persuasive essays for tests. Each essay is written in response to a topic such as “should high school make music lessons compulsory?” and has already been annotated by Stab and Gurevych with an argument tree. Hence, rather than annotate everything from scratch, we annotate the persuasiveness score of each argument in the already-annotated argument trees in this essay collection as well as the attributes that potentially impact persuasiveness.

Table 1: Corpus statistics. Essays: 102; Sentences: 1462; Tokens: 24518; Major Claims: 185; Claims: 567; Premises: 707; Support Relations: 3615; Attack Relations: 219.

Each argument tree is composed of three types of tree nodes that correspond to argument components. The three annotated argument component types include: MajorClaim, which expresses the author’s stance with respect to the essay’s topic; Claims, which are controversial statements that should not be accepted by readers without additional support; and Premises, which are reasons authors give to persuade readers about the truth of another argument component statement. The two relation types include: Support, which indicates that one argument component supports another, and Attack, which indicates that one argument component attacks another. Each argument tree has three to four levels. The root is a major claim. Each node in the second level is a claim that supports or attacks its parent (i.e., the major claim). Each node in the third level is a premise that supports or attacks its parent (i.e., a claim). There is an optional fourth level consisting of nodes that correspond to premises. Each of these premises either supports or attacks its (premise) parent. Stab and Gurevych (2014a) report high inter-annotator agreement on these annotations: for the annotations of major claims, claims, and premises, the Krippendorff’s α values (Krippendorff, 1980) are 0.77, 0.70, and 0.76 respectively, and for the annotations of support and attack relations, the α values are both 0.81. Note that Stab and Gurevych (2014a) determine premises and claims by their position in the argument tree and not by their semantic meaning. Due to the difficulty of treating an opinion as a non-negotiable unit of evidence, we convert all subjective premises into claims to indicate that they are subjective and require backing.
2 It is unclear what the author is trying to argue or the argument is poor and just so riddled with errors as to be completely unpersuasive. 1 The author does not appear to make any argument (e.g. he may just describe some incident without explaining why it is important). It could not persuade any readers because there is nothing to be persuaded of. It may or may not contain detectable errors, but errors are moot since there is not an argument for them to interfere with. Table 2: Description of the Persuasiveness scores. Attribute Possible Values Applicability Description Specificity 1–5 MC,C,P How detailed and specific the statement is Eloquence 1–5 MC,C,P How well the idea is presented Evidence 1–6 MC,C,P How well the supporting statements support their parent Logos/Pathos/Ethos yes,no MC,C Whether the argument uses the respective persuasive strategy Relevance 1–6 C,P The relevance of the statement to the parent statement ClaimType value,fact,policy C The category of what is being claimed PremiseType see Section 4.2 P The type of Premise, e.g. statistics, definition, real example, etc. Strength 1–6 P How well a single statement contributes to persuasiveness Table 3: Summary of the attributes together with their possible values, the argument component type(s) each attribute is applicable to (MC: MajorClaim, C: Claim, P: Premise), and a brief description. Score Description 5 Demonstrates mastery of English. There are no grammatical errors that distract from the meaning of the sentence. Exhibits a well thought out, flowing sentence structure that is easy to read and conveys the idea exceptionally well. 4 Demonstrates fluency in English. If there are any grammatical or syntactical errors, their affect on the meaning is negligible. Word choice suggests a broad vocabulary. 3 Demonstrates competence in English. There might be a few errors that are noticeable but forgivable, such as an incorrect verb tense or unnecessary pluralization. Demonstrates a typical vocabulary and a simple sentence structure. 2 Demonstrates poor understanding of sentence composition and/or poor vocabulary. The choice of words or grammatical errors force the reader to reread the sentence before moving on. 1 Demonstrates minimal eloquence. The sentence contains errors so severe that the sentence must be carefully analyzed to deduce its meaning. Table 4: Description of the Eloquence scores. Score Description 6 A very strong, very persuasive argument body. There are many supporting components that have high Relevance scores. There may be a few attacking child components, but these components must be used for either concession or refuting counterarguments as opposed to making the argument indecisive or contradictory. 5 A strong, persuasive argument body. There are sufficient supporting components with respectable scores. 4 A decent, fairly persuasive argument body. 3 A poor, possibly persuasive argument body. 2 A totally unpersuasive argument body. 1 There is no argument body for the given component. Table 5: Description of the Evidence scores. three attributes are not inherent to the text identifying the major claim but instead summarize the child components in the argument tree. Claim The claim argument component possesses all of the attributes of a major claim in addition to a Relevance score and a ClaimType. In order for an argument to be persuasive, all supporting components must be relevant to the component that they support/attack. The scoring rubric for Relevance is shown in Table 8. 
The ClaimType can be value (e.g., something is good or bad, important or not important, etc.), fact (e.g. something 625 Score Description 5 The claim summarizes the argument well and has a qualifier that indicates the extent to which the claim holds true. Claims that summarize the argument well must reference most or all of the supporting components. 4 The claim summarizes the argument very well by mentioning most or all of the supporting components, but does not have a qualifier indicating the conditions under which the claim holds true. Alternatively, the claim may moderately summarize the argument by referencing a minority of supporting components and contain qualifier. 3 The claim has a qualifier clause or references a minority of the supporting components, but not both. 2 The claim does not make an attempt to summarize the argument nor does it contain a qualifier clause. 1 Simply rephrases the major claim or is outside scope of the major claim (argument components were annotated incorrectly: major claim could be used to support claim). Table 6: Description of the Claim and MajorClaim Specificity scores. Score Description 5 An elaborate, very specific statement. The statement contains numerical data, or a historical example from the real world. There is (1) both a sufficient qualifier indicating the extent to which the statement holds true and an explanation of why the statement is true, or (2) at least one real world example, or (3) a sufficient description of a hypothetical situation that would evoke a mental image of the situation in the minds of most readers. 4 A more specific statement. It is characterized by either an explanation of why the statement is true, or a qualifier indicating when/to what extent the statement is true. Alternatively, it may list examples of items that do not qualify as historical events. 3 A sufficiently specific statement. It simply states a relationship or a fact with little ambiguity. 2 A broad statement. A statement with hedge words and without other redeeming factors such as explicit examples, or elaborate reasoning. Additionally, there are few adjectives or adverbs. 1 An extremely broad statement. There is no underlying explanation, qualifiers, or real-world examples. Table 7: Description of the Premise Specificity scores. Score Description 6 Anyone can see how the support relates to the parent claim. The relationship between the two components is either explicit or extremely easy to infer. The relationship is thoroughly explained in the text because the two components contain the same words or exhibit coreference. 5 There is an implied relationship that is obvious, but it could be improved upon to remove all doubt. If the relationship is obvious, both relating components must have high Eloquence and Specificity scores. 4 The relationship is fairly clear. The relationship can be inferred from the context of the two statements. One component must have a high Eloquence and Specificity scores and the other must have lower but sufficient Eloquence and Specificity scores for the relationship to be fairly clear. 3 Somewhat related. It takes some thinking to imagine how the components relate. The parent component or the child component have low clarity scores. The two statements are about the same topic but unrelated ideas within the domain of said topic. 2 Mostly unrelated. It takes some major assumptions to relate the two components. A component may also receive this score if both components have low clarity scores. 1 Totally unrelated. 
Very few people could see how the two components relate to each other. The statement was annotated to show that it relates to the claim, but this was clearly in error. Table 8: Description of the Relevance scores. is true or false), or policy (claiming that some action should or should not be taken). Premise The attributes exclusive to premises are PremiseType and Strength. To understand Strength, recall that only premises can persuade readers, but also that an argument can be composed of a premise and a set of supporting/attacking premises. In an argument of this kind, Strength refers to how well the parent premise contributes to the persuasiveness independently of the contributions from its children. The scoring rubric for Strength is shown in Table 9. PremiseType takes on a discrete value from one of the following: real example, invented instance, analogy, testimony, statistics, definition, common knowledge, and warrant. Analogy, testimony, statistics, and definition are self-explanatory. A premise is labeled invented instance when it describes a hypothetical situation, and definition when it provides a definition to be used elsewhere in the argument. A premise has type warrant when it does not fit any other type, but serves a functional purpose to explain the relationship between two entities or clarify/quantify another statement. The real example premise type indicates that the statement is a historical event that actually occurred, or something that is verfiably true about the real world. 626 Score Description 6 A very strong premise. Not much can be improved in order to contribute better to the argument. 5 A strong premise. It contributes to the persuasiveness of the argument very well on its own. 4 A decent premise. It is a fairly strong point but lacking in one or more areas possibly affecting its perception by the audience. 3 A fairly weak premise. It is not a strong point and might only resonate with a minority of readers. 2 A totally weak statement. May only help to persuade a small number of readers. 1 The statement does not contribute at all. Table 9: Description of the Strength scores. Attribute Value MC C P Specificity 1 0 80 64 2 73 259 134 3 72 155 238 4 32 59 173 5 8 14 98 Logos Yes 181 304 No 4 263 Pathos Yes 67 59 No 118 508 Ethos Yes 16 9 No 169 558 Relevance 1 1 5 2 33 45 3 58 59 4 132 145 5 97 147 6 246 306 Evidence 1 3 246 614 2 62 115 28 3 57 85 12 4 33 80 26 5 16 35 15 6 14 6 12 Eloquence 1 3 23 24 2 19 106 97 3 116 320 383 4 42 102 154 5 5 16 49 ClaimType fact 368 value 145 policy 54 PremiseType real example 93 invented instance 53 analogy 2 testimony 4 statistics 15 definition 3 common know. 493 warrant 44 Persuasiveness 1 3 82 8 2 62 278 112 3 60 84 145 4 28 74 249 5 17 39 123 6 15 10 70 Table 10: Class/Score distributions by component type. 4.3 Annotation Procedure Our 102 essays were annotated by two native speakers of English. We first familiarized them with the rubrics and definitions and then trained Attribute MC C P Persuasiveness .739 .701 .552 Eloquence .590 .580 .557 Specificity .560 .530 .690 Evidence .755 .878 .928 Relevance .678 .555 Strength .549 Logos 1 .842 Pathos .654 .637 Ethos 1 1 ClaimType .589 PremiseType .553 Table 11: Krippendorff’s α agreement on each attribute by component type. them on five essays (not included in our corpus). After that, they were both asked to annotate a randomly selected set of 30 essays and discuss the resulting annotations to resolve any discrepancies. 
Finally, the remaining essays were partitioned into two sets, and each annotator received one set to annotate. The resulting distributions of scores/classes for persuasiveness and the attributes are shown in Table 10. 4.4 Inter-Annotator Agreement We use Krippendorff’s α to measure interannotator agreement. Results are shown in Table 11. As we can see, all attributes exhibit an agreement above 0.5, showing a correlation much more significant than random chance. Persuasiveness has an agreement of 0.688, which suggests that it can be agreed upon in a reasonably general sense. The MajorClaim components have the highest Persuasiveness agreement, and it declines as the type changes to Claim and then to Premise. This would indicate that persuasiveness is easier to articulate in a wholistic sense, but difficult to explain as the number of details involved in the explanation increases. The agreement scores that immediately stand out are the perfect 1.0’s for Logos and Ethos. The perfect Logos score is explained by the fact that every major claim was marked to use logos. Although ethos is far less common, both annotators 627 easily recognized it. This is largely due to the indisputability of recognizing a reference to an accepted authority on a given subject. Very few authors utilize this approach, so when they do it is extremely apparent. Contrary to Persuasiveness, Evidence agreement exhibits an upward trend as the component scope narrows. Even with this pattern, the Evidence agreement is always higher than Persuasiveness agreement, which suggests that it is not the only determiner of persuasiveness. In spite of a rubric defining how to score Eloquence, it remains one of the attributes with the lowest agreement. This indicates that it is difficult to agree on exact eloquence levels beyond basic English fluency. Additionally, Specificity produced unexpectedly low agreement in claims and major claims. Precisely quantifying how well a claim summarizes its argument turned out to be a complicated and subjective task. Relevance agreement for premises is one of the lowest, partly because there are multiple scores for high relevance, and no examples were given in the rubric. All attributes but those with the highest agreement are plagued by inherent subjectivity, regardless of how specific the rubric is written. There are often multiple interpretations of a given sentence, sometimes due to the complexity of natural language, and sometimes due to the poor writing of the author. Naturally, this makes it difficult to identify certain attributes such as Pathos, ClaimType, and PremiseType. Although great care was taken to make each attribute as independent of the others as possible, they are all related to each other to a minuscule degree (e.g., Eloquence and Specificity). While annotators generally agree on what makes a persuasive argument, the act of assigning blame to the persuasiveness (or lack thereof) is tainted by this overlapping of attributes. 4.5 Analysis of Annotations To understand whether the attributes we annotated are indeed useful for predicting persuasiveness, we compute the Pearson’s Correlation Coefficient (PC) between persuasiveness and each of the attributes along with the corresponding p-values. Results are shown in Table 12. Among the correlations that are statistically significant at the p < .05 level, we see, as expected, that Persuasiveness is positively correlated with Specificity, Evidence, Eloquence, and Strength. 
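As a concrete rendering of this correlation analysis, the following is a minimal sketch of how per-attribute correlations with Persuasiveness could be computed, assuming the annotations have been loaded into a flat table; the column names and toy values are illustrative and do not reflect the field names of the released corpus.

```python
# Sketch: correlating each annotated attribute with Persuasiveness
# (the analysis reported in Table 12). Column names and toy values are
# illustrative -- they are not the field names of the released corpus.
import pandas as pd
from scipy.stats import pearsonr

def attribute_correlations(df, target="persuasiveness"):
    """Return {attribute: (Pearson r, p-value)} against the target column."""
    results = {}
    for col in df.columns:
        if col == target:
            continue
        # Skip components for which the attribute is not applicable (NaN).
        pair = df[[target, col]].dropna()
        r, p = pearsonr(pair[target], pair[col])
        results[col] = (r, p)
    return results

toy = pd.DataFrame({
    "persuasiveness": [4, 2, 5, 3, 1],
    "specificity":    [4, 2, 5, 3, 2],
    "eloquence":      [3, 2, 5, 4, 1],
})
for attr, (r, p) in attribute_correlations(toy).items():
    print(f"{attr}: PC={r:.3f}, p={p:.4f}")
```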
Neither is it surAttribute PC p-value Specificity .5680 0 Relevance −.0435 .163 Eloquence .4723 0 Evidence .2658 0 Strength .9456 0 Logos −.1618 0 Ethos −.0616 .1666 Pathos −.0835 .0605 ClaimType:fact .0901 .1072 ClaimType:value −.0858 .1251 ClaimType:policy −.0212 .7046 PremiseType:real example .2414 0 PremiseType:invented instance .0829 .0276 PremiseType:analogy .0300 .4261 PremiseType:testimony .0269 .4746 PremiseType:statistics .1515 0 PremiseType:definition .0278 .4608 PremiseType:common knowledge −.2948 1.228 PremiseType:warrant .0198 .6009 Table 12: Correlation of each attribute with Persuasiveness and the corresponding p-value. MC C P Avg PC .9688 .9400 .9494 .9495 ME .0710 .1486 .0954 .1061 Table 13: Persuasiveness scoring using gold attributes. prising that support provided by a premise in the form of statistics and examples is positively correlated with Persuasiveness. While Logos and invented instance also have significant correlations with Persuasiveness, the correlation is very weak. Next, we conduct an oracle experiment in an attempt to understand how well these attributes, when used together, can explain the persuasiveness of an argument. Specifically, we train three linear SVM regressors (using the SVMlight software (Joachims, 1999) with default learning parameters except for C (the regularization parameter), which is tuned on development data using grid search) to score an argument’s persuasiveness using the gold attributes as features. The three regressors are trained on arguments having MajorClaims, Claims, and Premises as parents. For instance, to train the regressor involving MajorClaims, each instance corresponds to an argument represented by all and only those attributes involved in the major claim and all of its children.4 Five-fold cross-validation results, which are 4There is a caveat. If we define features for each of the children, the number of features will be proportional to the number of children. However, SVMs cannot handle a variable number of features. Hence, all of the children will be represented by one set of features. For instance, the Specificity feature value of the children will be the Specificity values averaged over all of the children. 628 Prompt: Government budget focus, young children or university? Education plays a significant role in a country’s long-lasting prosperity. It is no wonder that governments throughout the world lay special emphasis on education development. As for the two integral components within the system, elementary and advanced education, there’s no doubt that a government is supposed to offer sufficient financial support for both. Concerning that elementary education is the fundamental requirement to be a qualified citizen in today’s society, government should guarantee that all people have equal and convenient access to it. So a lack of well-established primary education goes hand in hand with a high rate of illiteracy, and this interplay compromises a country’s future development. In other words, if countries, especially developing ones, are determined to take off, one of the key points governments should set on agenda is to educate more qualified future citizens through elementary education. . . . Table 14: An example essay. Owing to space limitations, only its first two paragraphs are shown. 
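Before walking through the annotated example, here is a rough sketch of the oracle persuasiveness-scoring experiment described in Section 4.5, with scikit-learn's linear SVR standing in for the SVMlight regressors used in the paper; the feature matrix is assumed to follow footnote 4, i.e., child attributes averaged into a fixed-length vector per argument.

```python
# Rough sketch of the oracle persuasiveness regressors of Section 4.5,
# with scikit-learn's linear SVR standing in for SVMlight. X is assumed to
# hold gold attribute features per argument (child attributes averaged into
# a fixed-length vector, as in footnote 4); y holds gold persuasiveness.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, cross_val_predict
from scipy.stats import pearsonr

def oracle_persuasiveness(X, y, folds=5):
    """Cross-validated persuasiveness predictions from gold attributes."""
    model = GridSearchCV(SVR(kernel="linear"),
                         param_grid={"C": [0.01, 0.1, 1.0, 10.0, 100.0]},
                         cv=folds)                      # tune C on held-out folds
    preds = cross_val_predict(model, X, y, cv=folds)    # out-of-fold predictions
    pc = pearsonr(y, preds)[0]                          # correlation metric (PC)
    me = float(np.mean(np.abs(y - preds)))              # mean absolute error (ME)
    return pc, me
```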
P E S Ev R St Lo Pa Et cType pType M1 government is supposed to offer sufficient financial support for both 3 4 2 3 T F F C1 if countries, especially developing ones, are determined to take off, one of the key points governments should set on agenda is to educate more qualified future citizens through elementary education 4 5 4 4 6 T F F policy P1 elementary education is the fundamental requirement to be a qualified citizen in today’s society 4 5 3 1 6 4 A C2 government should guarantee that all people have equal and convenient access to it 2 3 1 1 6 F F F policy P2 a lack of well-established primary education goes hand in hand with a high rate of illiteracy, and this interplay compromises a country’s future development 4 5 3 1 6 4 C Table 15: The argument components in the example in Table 14 and the scores of their associated attributes: Persuasiveness, Eloquence, Specificity, Evidence, Relevance, Strength, Logos, Pathos, Ethos, claimType, and premiseType. shown in Table 13, are expressed in terms of two evaluation metrics, PC and ME (the mean absolute distance between a system’s prediction and the gold score). Since PC is a correlation metric, higher correlation implies better performance. In contrast, ME is an error metric, so lower scores imply better performance. As we can see, the large PC values and the relatively low ME values provide suggestive evidence that these attributes, when used in combination, can largely explain the persuasiveness of an argument. What these results imply in practice is that models that are trained on these attributes for persuasiveness scoring could provide useful feedback to students on why their arguments are (un)persuasive. For instance, one can build a pipeline system for persuasiveness scoring as follows. Given an argument, this system first predicts its attributes and then scores its persuasiveness using the predicted attribute values computed in the first step. Since the persuasiveness score of an argument is computed using its predicted attributes, these attributes can explain the persuasiveness score. Hence, a student can figure out which aspect of persuasiveness needs improvements by examining the values of the predicted attributes. 4.6 Example To better understand our annotation scheme, we use the essay in Table 14 to illustrate how we obtain the attribute values in Table 15. In this essay, Claim C1, which supports MajorClaim M1, is supported by three children, Premises P1 and P2 as well as Claim C2. After reading the essay in its entirety and acquiring a holistic impression of the argument’s strengths and weaknesses, we begin annotating the atomic argument components bottom up, starting with the leaf nodes of the argument tree. First, we consider P2. Its Evidence score is 1 because it is a leaf node with no supporting evidence. Its Eloquence score is 5 because the sentence has no serious grammatical or syntactic errors, has a flowing, well thought out sentence structure, and uses articulate vocabulary. Its Specificity score is 3 because it is essentially saying that poor primary education causes illiteracy and consequently inhibits a country’s development. It does not state why or to what extent, so we cannot assign a score of 4. However, it does explain a simple relationship with little ambiguity due to the lack of hedge words, so 629 we can assign a score of 3. 
Its PremiseType is common knowledge because it is reasonable to assume most people would agree that poor primary education causes illiteracy, and also that illiteracy inhibits a country’s development. Its Relevance score is 6: its relationship with its parent is clear because the two components exhibit coreference. Specifically, P2 contains a reference to primary/elementary education and shows how this affects a country’s inability to transition from developing to developed. Its Strength is 4: though eloquent and relevant, P2 is lacking substance in order to be considered for a score of 5 or 6. The PremiseType is common knowledge, which is mediocre compared to statistics and real example. In order for a premise that is not grounded in the real world to be strong, it must be very specific. P2 only scored a 3 in Specificity, so we assign a Strength score of 4. Finally, the argument headed by P2, which does not have any children, has a Persuasiveness score of 4, which is obtained by summarizing the inherent strength of the premise and the supporting evidence. Although there is no supporting evidence for this premise, this does not adversely affect persuasiveness due to the standalone nature of premises. In this case the persuasiveness is derived totally from the strength. Next, the annotator would score C2 and P1, but for demonstration purposes we will examine the scoring of C1. C1’s Eloquence score is 5 because it shows fluency, broad vocabulary, and attention to how well the sentence structure reads. Its ClaimType is policy because it specifically says that the government should put something on their agenda. Its Specificity score is 4: while it contains information relevant to all the child premises (i.e., creating qualified citizens, whose role it is to provide the education, and the effect of education on a country’s development), it does not contain a qualifier stating the extent to which the assertion holds true. Its Evidence score is 4: C1 has two premises with decent persuasiveness scores and one claim with a poor persuasiveness score, and there are no attacking premises, so intuitively, we may say that this is a midpoint between many low quality premises and few high quality premises. We mark Logos as true, Pathos as false, and Ethos as false: rather than use an emotional appeal or an appeal to authority of any sort, the author attempts to use logical reasoning in order to prove their point. Its Persuasiveness score is 4: this score is mainly determined by the strength of the supporting evidence, given that the assertion is precise and clear as determined by the specificity and eloquence. Its Relevance score is 6, as anyone can see how endorsement of elementary education in C1 relates to the endorsement of elementary and university education in its parent (i.e., M1). After all of the claims have been annotated in the bottom-up method, the annotator moves on to the major claim, M1. M1’s Eloquence score is 4: while it shows fluency and a large vocabulary, it is terse and does not convey the idea exceptionally well. Its persuasion strategies are obtained by simply taking the logical disjunction of those used in its child claims. Since every claim in this essay relied on logos and did not employ pathos nor ethos, M1 is marked with Logos as true, Pathos as false, and Ethos as false. 
Its Evidence score is 3: in this essay there are two other supporting claims not in the excerpt, with persuasiveness scores of only 3 and 2, so M1’s evidence has one decently persuasive claim, one claim that is poor but understandable, and one claim that is so poor as to be completely unpersuasive (in this case it has no supporting premises). Its Specificity score is 2 because it does not have a quantifier nor does it attempt to summarize the main points of the evidence. Finally, its Persuasiveness score is 3: all supporting claims rely on logos, so there is no added persuasiveness from a variety of persuasion strategies, and since the eloquence and specificity are adequate, they do not detract from the Evidence score. 5 Conclusion We presented the first corpus of 102 persuasive student essays that are simultaneously annotated with argument trees, persuasiveness scores, and attributes of argument components that impact these scores. We believe that this corpus will push the frontiers of research in content-based essay grading by triggering the development of novel computational models concerning argument persuasiveness that could provide useful feedback to students on why their arguments are (un)persuasive in addition to how persuasive they are. Acknowledgments We thank the three anonymous reviewers for their detailed and insightful comments on an earlier draft of the paper. This work was supported in part by NSF Grants IIS-1219142 and IIS-1528037. 630 References Khalid Al Khatib, Henning Wachsmuth, Johannes Kiesel, Matthias Hagen, and Benno Stein. 2016. A news editorial corpus for mining argumentation strategies. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3433–3443. Frans H. van Eemeren, Bart Garssen, Erik C. W. Krabbe, Francisca A. Snoeck Henkemans, Bart Verheij, and Jean H. M. Wagemans. 2014. In Handbook of Argumentation Theory. Springer, Dordrecht. Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11–22. Ivan Habernal and Iryna Gurevych. 2016a. What makes a convincing argument? Empirical analysis and detecting attributes of convincingness in Web argumentation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1214–1223. Ivan Habernal and Iryna Gurevych. 2016b. Which argument is more convincing? Analyzing and predicting convincingness of Web arguments using bidirectional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1589– 1599. Christopher Hidey, Elena Musi, Alyssa Hwang, Smaranda Muresan, and Kathy McKeown. 2017. Analyzing the semantic types of claims and premises in an online persuasive forum. In Proceedings of the 4th Workshop on Argument Mining, pages 11–21. Colin Higgins and Robyn Walker. 2012. Ethos, logos, pathos: Strategies of persuasion in social/environmental reports. Accounting Forum, 36:194-208. Derrick Higgins, Jill Burstein, Daniel Marcu, and Claudia Gentile. 2004. Evaluating multiple aspects of coherence in student essays. In Human Language Technologies: The 2004 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 185–192. T. Joachims. 1999. Making large-scale SVM learning practical. In B. Sch¨olkopf, C. Burges, and A. 
Smola, editors, Advances in Kernel Methods Support Vector Learning, chapter 11, pages 169– 184. MIT Press, Cambridge, MA. Klaus Krippendorff. 1980. Content Analysis: An Introduction to Its Methodology. Sage commtext series. Sage, Thousand Oaks, CA. Stephanie Lukin, Pranav Anand, Marilyn Walker, and Steve Whittaker. 2017. Argument strength is in the eye of the beholder: Audience effects in persuasion. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 742–753. Eleni Miltsakaki and Karen Kukich. 2004. Evaluation of text coherence for electronic essay scoring systems. Natural Language Engineering, 10(1):25–55. Isaac Persing, Alan Davis, and Vincent Ng. 2010. Modeling organization in student essays. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 229– 239. Isaac Persing and Vincent Ng. 2013. Modeling thesis clarity in student essays. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 260–269. Isaac Persing and Vincent Ng. 2014. Modeling prompt adherence in student essays. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1534–1543. Isaac Persing and Vincent Ng. 2015. Modeling argument strength in student essays. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 543–552. Isaac Persing and Vincent Ng. 2017. Why can’t you convince me? Modeling weaknesses in unpersuasive arguments. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 4082–4088. Mark D. Shermis and Jill Burstein. 2013. Handbook of Automated Essay Evaluation: Current Applications and New Directions. Routledge Chapman & Hall. Christian Stab and Iryna Gurevych. 2014a. Annotating argument components and relations in persuasive essays. In Proceedings of the 25th International Conference on Computational Linguistics: Technical Papers, pages 1501–1510. Christian Stab and Iryna Gurevych. 2014b. Identifying argumentative discourse structures in persuasive essays. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 46–56. Christian Stab and Iryna Gurevych. 2017a. Parsing argumentation structures in persuasive essays. Computational Linguistics, 43(3):619–659. Christian Stab and Iryna Gurevych. 2017b. Recognizing insufficiently supported arguments in argumentative essays. In Proceedings of the 15th Conference of the European Chapter of the Association for 631 Computational Linguistics: Volume 1, Long Papers, pages 980–990. Henning Wachsmuth, Nona Naderi, Yufang Hou, Yonatan Bilu, Vinodkumar Prabhakaran, Tim Alberdingk Thijm, Graeme Hirst, and Benno Stein. 2017. Computational argumentation quality assessment in natural language. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 176–187. Zhongyu Wei, Yang Liu, and Yi Li. 2016. Is this post persuasive? Ranking argumentative comments in online forum. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 195–200.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 632–642 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 632 Inherent Biases in Reference-based Evaluation for Grammatical Error Correction and Text Simplification Leshem Choshen1 and Omri Abend2 1School of Computer Science and Engineering, 2 Department of Cognitive Sciences The Hebrew University of Jerusalem [email protected], [email protected] Abstract The prevalent use of too few references for evaluating text-to-text generation is known to bias estimates of their quality (henceforth, low coverage bias or LCB). This paper shows that overcoming LCB in Grammatical Error Correction (GEC) evaluation cannot be attained by re-scaling or by increasing the number of references in any feasible range, contrary to previous suggestions. This is due to the long-tailed distribution of valid corrections for a sentence. Concretely, we show that LCB incentivizes GEC systems to avoid correcting even when they can generate a valid correction. Consequently, existing systems obtain comparable or superior performance compared to humans, by making few but targeted changes to the input. Similar effects on Text Simplification further support our claims. 1 Introduction Evaluation in monolingual translation (Xu et al., 2015; Mani, 2009) and in particular in GEC (Tetreault and Chodorow, 2008; Madnani et al., 2011; Felice and Briscoe, 2015; Bryant and Ng, 2015; Napoles et al., 2015) has gained notoriety for its difficulty, due in part to the heterogeneity and size of the space of valid corrections (Chodorow et al., 2012; Dreyer and Marcu, 2012). Reference-based evaluation measures (RBM) are the common practice in GEC, including the standard M2 (Dahlmeier and Ng, 2012), GLEU (Napoles et al., 2015) and I-measure (Felice and Briscoe, 2015). The Low Coverage Bias (LCB) was previously discussed by Bryant and Ng (2015), who showed that inter-annotator agreement in producing references is low, and concluded that RBMs underestimate the performance of GEC systems. To address this, they proposed a new measure, Ratio Scoring, which re-scales M2 by the interannotator agreement (i.e., the score of a human corrector), interpreted as an upper bound. We claim that the LCB has more far-reaching implications than previously discussed. First, while we agree with Bryant and Ng (2015) that a human correction should receive a perfect score, we show that LCB does not merely scale system performance by a constant factor, but rather that some correction policies are less prone to be biased against. Concretely, we show that by only correcting closed class errors, where few possible corrections are valid, systems can outperform humans. Indeed, in Section 2.3 we show that some existing systems outperform humans on M2 and GLEU, while only applying few changes to the source. We thus argue that the development of GEC systems against low coverage RBMs disincentivizes systems from making changes to the source in cases where there are plentiful valid corrections (open class errors), as necessarily only some of them are covered by the reference set. To support our claim we show that (1) existing GEC systems under-correct, often performing an order of magnitude less corrections than a human does (§3.2); (2) increasing the number of references alleviates under-correction (§3.3); and (3) under-correction is more pronounced in error types that are more varied in their valid corrections (§3.4). 
A different approach for addressing LCB was taken by (Bryant and Ng, 2015; Sakaguchi et al., 2016), who propose to increase the number of references (henceforth, M). In Section 2 we estimate the distribution of corrections per sentence, and find that increasing M is unlikely to overcome LCB, due to the vast number of valid corrections 633 for a sentence and their long-tailed distribution. Indeed, even short sentences have over 1000 valid corrections on average. Empirically assessing the effect of increasing M on the bias, we find diminishing returns using three standard GEC measures (M2, accuracy and GLEU), underscoring the difficulty in this approach. Similar trends are found when conducting such experiments to Text Simplification (TS) (§4). Specifically we show that (1) the distribution of valid simplifications for a given sentence is longtailed; (2) common measures for TS dramatically under-estimate performance; (3) additional references alleviate this under-prediction. To recap, we find that the LCB hinders the reliability of RBMs for GEC, and incentivizes systems developed to optimize these measures not to correct. LCB cannot be overcome by re-scaling or increasing M in any feasible range. 2 Coverage in RBMs We begin by formulating a methodology for studying the distribution of valid corrections for a sentence (§2.1), and then turn to assessing the effect inadequate coverage has on common RBMs (§2.2). Finally, we compare human and system scores by common RBMs (§2.3). Notation. We assume each ungrammatical sentence x has a set of valid corrections Correctx, and a discrete distribution Dx over them, where PDx(y) for y ∈Correctx is the probability a human annotator would correct x as y. Let X = x1 . . . xN be the evaluated set of source sentences and denote Di := Dxi. Each xi is independently sampled from some distribution L over input sentences, and is paired with M corrections Yi =  y1 i , . . . , yM i , which are independently sampled from Di. Our analysis assumes a fixed number of references across sentences, but generalizing to sentence-dependent M is straightforward. The coverage of a reference set Yi of size M for a sentence xi is defined as Py∼Di(y ∈Yi). A system C is a function from input sentences to proposed corrections (strings). An evaluation measure is a function f : X × Y × C →R. We use the term “true measure” to refer to a measure’s output where the reference set includes all valid corrections, i.e., ∀i: Yi = Correcti. Experimental Setup. We conduct all experiments on the NUCLE test dataset (Dahlmeier et al., 2013). NUCLE is a parallel corpus of essays written by language learners and their corrected versions, containing 1414 essays and 50 test essays, each of about 500 words. We evaluate all participating systems in the CoNLL 2014 shared task, in addition to three of the best performing systems on this dataset, a hybrid system (Rozovskaya and Roth, 2016), a phrase-based MT system (Junczys-Dowmunt and Grundkiewicz, 2016) and a neural network system (Xie et al., 2016). Appendix A lists system names and abbreviations. 2.1 Estimating the Corrections Distribution Data. We turn to estimating the number of corrections per sentence, and their histogram. The experiments in the following section are run on a random sample of 52 short sentences from the NUCLE test data, i.e. with 15 words or less. 
Through the length restriction, we avoid introducing too many independent errors that may drastically increase the number of annotation variants (as every combination of corrections for these errors is possible), thus resulting in unreliable estimation of Dx. Proven effective in GEC and related tasks such as MT (Zaidan and Callison-Burch, 2011; Madnani et al., 2011; Post et al., 2012), we use crowdsourcing to sample from Dx (see Appendix B). Aiming to judge grammaticality rather than fluency, we instructed the workers to correct only when necessary, not for styling.

We begin by estimating the histogram of Dx for each sentence, using the crowdsourced corrections. We use UNSEENEST (Zou et al., 2016), a non-parametric algorithm for estimating a discrete distribution in which the individual values do not matter, only their probabilities. UNSEENEST aims to minimize the "earthmover distance" between the estimated histogram and the histogram of the distribution. Intuitively, if histograms are piles of dirt, UNSEENEST minimizes the amount of dirt moved times the distance it moved. UNSEENEST was originally developed and tested for estimating the histogram of variants a gene may have, including undiscovered ones, a setting similar to ours. Our manual tests of UNSEENEST with small artificially created datasets showed satisfactory results.[1]

[1] An implementation of UNSEENEST, the data we collected, the estimated distributions and efficient implementations of computations with Poisson binomial distributions can be found in https://github.com/borgr/IBGEC.

Our estimates show that most input sentences have a large number of infrequent corrections that account for much of the probability mass and a rather small number of frequent corrections. Table 1 presents the mean number of different corrections with frequency at least γ (for different values of γ), and their total probability mass. For instance, 74.34 corrections account for 75% of the probability mass, each occurring with frequency ≥ 0.1%.

Frequency Threshold (γ):   0         0.001    0.01    0.1
Variants:                  1351.24   74.34    8.72    1.35
Mass:                      1         0.75     0.58    0.37

Table 1: Estimating the distribution of corrections Dx. The table presents the mean number of corrections per sentence with probability more than γ (top row), as well as their total probability mass (bottom row).

The high number of rare corrections raises the question of whether these can be regarded as noise. To test this, we conducted another crowdsourcing experiment, where 3 annotators were asked to judge whether a correction produced in the first experiment is indeed valid. We plot the validity of corrections against their frequencies, finding that frequency has little effect: even the rarest corrections are judged valid 78% of the time. Details in Appendix C.

2.2 Under-estimation as a Function of M

After estimating the histogram of valid corrections for a sentence, we turn to estimating the resulting bias (LCB) for different M values. We study sentence-level accuracy, F-Score and GLEU.

Sentence-level Accuracy. Sentence-level accuracy is the percentage of corrections that exactly match one of the references. Accuracy is a basic, interpretable measure, used in GEC by, e.g., Rozovskaya and Roth (2010). It is also closely related to the 0-1 loss function commonly used for training in GEC (Chodorow et al., 2012; Rozovskaya and Roth, 2013). Formally, given test sentences X = {x_1, ..., x_N}, their references Y_1, ..., Y_N and a system C, we define C's accuracy to be

Acc(C; X, Y) = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}_{C(x_i) \in Y_i}.    (1)

Note that C's accuracy is, in fact, an estimate of C's true accuracy, the probability of producing a valid correction for a sentence. Formally:

TrueAcc(C) = P_{x \sim L}(C(x) \in Correct_x).    (2)

The bias of Acc(C; X, Y) for a sample of N sentences, each paired with M references, is then

TrueAcc(C) - E_{X,Y}[Acc(C; X, Y)]    (3)
  = TrueAcc(C) - P(C(x) \in Y)    (4)
  = P(C(x) \in Correct_x)    (5)
    \cdot (1 - P(C(x) \in Y \mid C(x) \in Correct_x)).    (6)

We observe that the bias, denoted b_M, is not affected by N, only by M. As M grows, Y better approximates Correct_x, and b_M tends to 0.

In order to abstract away from the idiosyncrasies of specific systems, we consider an idealized learner, which, when correct, produces a valid correction with the same distribution as a human annotator (i.e., according to Dx). Formally, we assume that if C(x) \in Correct_x then C(x) \sim D_x. Hence the bias b_M (Eq. 6) can be re-written as P(C(x) \in Correct_x) \cdot (1 - P_{Y \sim D_x^M, y \sim D_x}(y \in Y)). We will henceforth assume that C is perfect (i.e., its true accuracy is 1). Note that assuming any other value for C's true accuracy would simply scale b_M by that accuracy. Similarly, assuming only a fraction p of the sentences require correction scales b_M by p.

We estimate b_M empirically using its empirical mean on our experimental corpus:

\hat{b}_M = 1 - \frac{1}{N} \sum_{i=1}^{N} P_{Y \sim D_i^M, y \sim D_i}(y \in Y).

Using the UNSEENEST estimations of D_i, we can compute \hat{b}_M for any size of Y_i (M). However, as this is highly computationally demanding, we estimate it using sampling. Specifically, for every M = 1, ..., 20 and x_i, we sample Y_i 1000 times (with replacement), and estimate P(y \in Y_i) as the covered probability mass P_{D_i}{y : y \in Y_i}. Based on that we compute the accuracy distribution and expectation (see Appendix D). We repeated all our experiments where Y_i is sampled without replacement, and find similar trends, with a faster increase in accuracy reaching over 0.47 with M = 10.

Figure 1a presents the expected accuracy of a perfect system (i.e., 1 - \hat{b}_M) for different values of M. Results show that even for M values which are much larger than the standard (e.g., M = 20), expected accuracy is only around 0.5. As M increases, the contribution of each additional correction diminishes sharply (the slope is 0.004 for M = 20).

Figure 1: The score obtained by perfect systems according to GEC accuracy (1a), GEC F-score and GLEU (1b). Figure 1c reports TS experimental results, namely the score of a perfect and lucky perfect system using SARI, and a perfect system using MAX-SARI. The y-axis corresponds to the measure values, and the x-axis to the number of references M. For bootstrapping experiments, points are paired with a confidence interval (p = .95). Panels: (a) Accuracy and Exact Index Match; (b) F0.5 and GLEU; (c) (lucky) perfect SARI and MAX-SARI.

We also experiment with a more relaxed measure, Exact Index Match, which is only sensitive to the identity of the changed words and not to what they were changed to. Formally, two corrections c and c' over a source sentence x match if, for their word alignments with the source (computed as above) a : {1, ..., |x|} \to {1, ..., |c|, Null} and a' : {1, ..., |x|} \to {1, ..., |c'|, Null}, it holds that c_{a(i)} \neq x_i \Leftrightarrow c'_{a'(i)} \neq x_i, where c_{Null} = c'_{Null}. Results, while somewhat higher, are still only 0.54 with M = 10 (Figure 1a).

F-Score. While accuracy is commonly used as a loss function for training GEC systems, Fα-score is standard for evaluating system performance.
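Before turning to F-score, the following is a minimal sketch of the sampling procedure described above for the expected accuracy of a perfect system (1 - \hat{b}_M, the curve in Figure 1a). Each per-sentence distribution D_i is assumed to be a dict mapping corrections to probabilities (e.g., as estimated by UNSEENEST); the distributions below are toy data, not corpus estimates.

```python
# Minimal sketch of the sampling estimate of the expected accuracy of a
# perfect system (1 - b_M hat). Each D_i is assumed to be a dict mapping a
# correction string to its probability, e.g. as estimated by UNSEENEST;
# the two distributions below are toy data, not corpus estimates.
import random

def expected_accuracy(distributions, m, samples=1000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for d_i in distributions:
        corrections, probs = zip(*d_i.items())
        covered = 0.0
        for _ in range(samples):
            # Sample a reference set of size M with replacement from D_i ...
            refs = set(rng.choices(corrections, weights=probs, k=m))
            # ... and record the probability mass it covers.
            covered += sum(p for y, p in d_i.items() if y in refs)
        total += covered / samples
    return total / len(distributions)

toy_dists = [{"fix A": 0.5, "fix B": 0.3, "fix C": 0.2},
             {"fix D": 0.9, "fix E": 0.1}]
for m in (1, 2, 5, 10, 20):
    print(m, round(expected_accuracy(toy_dists, m), 3))
```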
The score is computed in terms of edit overlap between edits that constitute a correction and ones that constitute a reference, where edits are substring replacements to the source. We use the standard M2 scorer (Dahlmeier and Ng, 2012), which defines edits optimistically, maximizing over all possible annotations that generate the correction from the source. Since our crowdsourced corrections are not annotated for edits, we produce edits to the reference heuristically. The complexity of the measure prohibits an analytic approach (Yeh, 2000). We instead use bootstrapping to estimate the bias incurred by not being able to exhaustively enumerate the set of valid corrections. As with accuracy, in order to avoid confounding our results with system-specific biases, we assume the evaluated system is perfect and sample its corrections from the human distribution of corrections Dx. Concretely, given a value for M and for N, we uniformly sample from our experimental corpus source sentences x1, ..., xN, and M corrections for each Y1, ..., YN (with replacement). Setting a realistic value for N in our experiments is important for obtaining comparable results to those obtained on the NUCLE corpus (see §2.3), as the expected value of F-score depends on N and the number of sentences that do not need correction (Ncor). Following the statistics of NUCLE’s test set, we set N = 1312 and Ncor = 136. Bootstrapping is carried out by the accelerated bootstrap procedure (Efron, 1987), with 1000 iterations. We also report confidence intervals (p = .95), computed using the same procedure. Results (Figure 1b) again show the insufficiency of commonly-used M values for reliably estimating system performance. For instance, the F0.5score for our perfect system is only 0.42 with M = 2. The saturation effect, observed for accuracy, is even more pronounced in this setting. GLEU. We repeat the procedure using the mean GLEU sentence score (Figure 1b), which was shown to better correlate with human judgments than M2 (Napoles et al., 2016). Results are about 2% higher than M2’s with a similar saturation effect. Sakaguchi et al. (2016) observed a similar effect when evaluating against fluency-oriented references; this has led them to assume that saturation is due to covering most of the probability mass, which we now show is not the case.2 2 We do not experiment with I-measure (Felice and Briscoe, 2015), as its run-time is prohibitively high for experimenting with bootstrapping that requires many applications of the measure (Choshen and Abend, 2018a), and as empirical validation studies showed that it has a low correlation with human judgments (Sakaguchi et al., 2016). 636 Figure 2: F0.5 values with M = 2 for different systems, including confidence interval (p = .95). The left-most column (“source”) presents the F-score of a system that doesn’t make any changes to the source sentences. In red is human performance. See §2 for a legend of the systems. 2.3 Human and System Performance The bootstrapping method for computing the significance of the F-score (§2.2) can also be used for assessing the significance of the differences in system performance reported in the literature. We compute confidence intervals of different systems on the NUCLE test data (M = 2). Results (Figure 2) present mixed trends: some differences between previously reported F-scores are indeed significant and some are not. For example, the best performing system is significantly better than all but the second one. 
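The resampling scheme behind these confidence intervals can be sketched as below for a sentence-level measure such as mean GLEU. The paper uses the accelerated (BCa) bootstrap of Efron (1987); for brevity the sketch shows the plain percentile variant, and for corpus-level F0.5 one would resample sentence indices and recompute the corpus-level score inside the loop.

```python
# Sketch of the resampling scheme behind the reported confidence intervals,
# shown for a sentence-level measure such as mean GLEU. The paper uses the
# accelerated (BCa) bootstrap (Efron, 1987); this is the plain percentile
# variant, kept short for illustration.
import random

def bootstrap_ci(sentence_scores, iterations=1000, p=0.95, seed=0):
    rng = random.Random(seed)
    n = len(sentence_scores)
    means = []
    for _ in range(iterations):
        resample = [sentence_scores[rng.randrange(n)] for _ in range(n)]
        means.append(sum(resample) / n)
    means.sort()
    lo = means[int(iterations * (1 - p) / 2)]
    hi = means[int(iterations * (1 + p) / 2) - 1]
    return sum(sentence_scores) / n, (lo, hi)

print(bootstrap_ci([0.4, 0.55, 0.3, 0.62, 0.51, 0.48]))
```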
Considering the F-score of the best-performing systems, and comparing them to the F-score of a perfect system with M = 2 (in accordance with systems’ reported results), we find that their scores are comparable, where the systems RoRo and JMGR surpass a perfect system’s F-score. Similar experiments with GLEU show that the two systems obtain comparable or superior performance to humans on this measure as well. 2.4 Discussion In this section we have established that (1) as systems can surpass human performance on RBMs, re-scaling cannot be used to overcome the LCB, and that (2) as the distribution of valid corrections is long-tailed, the number of references needed for reliable RBMs is exceedingly high. Indeed, an average sentence has hundreds or more valid low-probability corrections, whose total probability mass is substantial. Our analysis with Exact Index Match suggests that similar effects are applicable to Grammatical Error Detection as well. The proposal of Sakaguchi et al. (2016), to emphasize fluency over grammaticality in reference corrections, only compounds this problem, as it results in a larger number of valid corrections. 3 Implications of the LCB We discuss the adverse effects of LCB not only on the reliability of RBMs, but on the development of GEC systems. We argue that evaluation with inadequate reference coverage incentivizes systems to under-correct, and to mostly target errors that have few valid corrections (closed-class). We first show that low coverage can lead to under-correction (§3.1), then show that modern systems make far fewer corrections to the source, compared to humans (§3.2). §3.3 shows that increasing the number of references can alleviate this effect. §3.4 shows that open-class errors are more likely to be under-corrected than closed-class ones. 3.1 Motivating Analysis For simplicity, we abstract away from the details of the learning model and assume that systems attempt to maximize an objective function, over some training or development data. We assume maximization is achieved by iterating over the samples, as with the Perceptron or SGD. Assume the system is faced with a phrase it predicts to be ungrammatical. Assume pdetect is the probability this prediction is correct, and pcorrect is the probability it is able to predict a valid correction for this phrase (including correctly identifying it as erroneous). Finally, assume evaluation is against M references with coverage pcoverage (the probability that a valid correction will be found among M randomly sampled references). We will now assume that the system may either choose to correct with the correction it finds the most likely or not at all. If it chooses not to correct, its probability of being rewarded (i.e., its output is in the reference set) is (1 −pdetect). Otherwise, its probability of being rewarded is pcorrect · pcoverage. A system is disincentivized from altering the phrase in cases where: pcorrect · pcoverage < 1 −pdetect (7) We expect Condition (7) to frequently hold in cases that require non-trivial changes, which are characterized both by low pcoverage (as non-trivial changes are often open-class), and by lower system performance. 637 Corrector Sentence Source This is especially to people who are overseas. CHAR, UMC, JMGR This is especially for people who are overseas. IPN This is especially to peoples who are overseas. CUUI This is especially to the people who are overseas. NUCLEA This is especially true for people who are overseas. 
NUCLEB This is especially relevant to people who are overseas. Table 2: Example for a sentence and proposed corrections by different systems (top part) and by the two NUCLE annotators (bottom part). Systems not mentioned in the table retain the source. No system produces a new word as needed. The two references differ in their corrections. Precision-oriented measures (e.g., F0.5) penalize invalidly correcting more harshly than not correcting an ungrammatical sentence. In these cases, Condition (7) should be written as pcorrect·pcoverage−(1 −pcorrect · pcoverage) α < 1−pdetect where α is the ratio between the penalty for introducing a wrong correction and the reward for a valid correction. The condition is even more likely to hold with such measures. 3.2 Under-correction in GEC Systems In this section we compare the prevalence of changes made to the source by the systems, to their prevalence in the NUCLE references. To strengthen our claim, we exclude all nonalphanumeric characters, both within tokens or as separate tokens. See Table 2 for an example. We consider three types of divergences between the source and the reference. First, we measure the extent to which words were changed: altered, deleted or added. To do so, we compute word alignment between the source and the reference, casting it as a weighted bipartite matching problem. Edge weights are assigned to be the token edit distances.3 Following word alignment, we define WORDCHANGE as the number of aligned words and unaligned words changed. Second, we quantify word order differences using Spearman’s ρ between the order of the words in the source sentence and the order of their corresponding-aligned words in the correction. ρ = 0 where the word 3Aligning words in GEC is much simpler than in MT, as most of the words are unchanged, deleted fully, added, or changed slightly. order is uncorrelated, and ρ = 1 where the orders exactly match. We report the average ρ over all source sentence pairs. Third, we report how many source sentences were split and how many concatenated by the reference and by the systems. One annotator was arbitrarily selected for the figures. Results. Results (Figure 3) show that humans make considerably more changes than systems according to all measures of under-correction, both in terms of the number of sentences modified and the number of modifications within them. Differences are often an order of magnitude large. For example, 36 reference sentences include 6 word changes, where the maximal number of sentences with 6 word changes by any system is 5. We find similar trends on the references of the TreeBank of Learner English (Yannakoudakis et al., 2011). 3.3 Higher M Alleviates Under-correction This section reports an experiment for determining whether increasing the number of references in training indeed reduces under-correction. There is no corpus available with multiple references which is large enough for re-training a system. Instead, we simulate such a setting with an oracle reranking approach, and test whether the availability of increasingly more training references reduces a system’s under-correction. Concretely, given a set of sentences, each paired with M references, a measure and a system’s kbest list, we define an oracle re-ranker that selects for each sentence the highest scoring correction. As a test case, we use the RoRo system with k = 100, and apply it to the largest available language learner corpus which is paired with a substantial amount of GEC references, namely the NUCLE test corpus. 
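A minimal sketch of the oracle re-ranker just described: for each source sentence it picks, from the system's k-best list, the candidate that maximizes the evaluation measure against the M available references. The scoring function is assumed to be supplied externally (e.g., sentence-level F0.5 or GLEU); the toy example below uses exact match.

```python
# Minimal sketch of an oracle re-ranker over k-best lists. `score_fn`
# stands in for the evaluation measure (e.g. sentence-level F0.5 or GLEU)
# and is assumed to be provided elsewhere; the toy example uses exact match.
def oracle_rerank(kbest_lists, reference_sets, score_fn):
    selected = []
    for candidates, references in zip(kbest_lists, reference_sets):
        # Keep the candidate that the measure rewards most, given the references.
        selected.append(max(candidates, key=lambda c: score_fn(c, references)))
    return selected

exact_match = lambda cand, refs: 1.0 if cand in refs else 0.0
print(oracle_rerank([["the cats sleep", "the cat sleeps"]],
                    [{"the cat sleeps", "a cat sleeps"}],
                    exact_match))  # ['the cat sleeps']
```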
We use the standard Fscore as the evaluation measure, examining the under-correction of the oracle re-ranker for different M values, averaging over the 1312 samples of M references from the available set of ten references provided by Bryant and Ng (2015). As the argument is not trivial, we turn to explaining why decreased under-correction with an increase in M indicates that tuning against a small set of references (low coverage) yields undercorrection. Assume an input sentence with some sub-string e. There are three cases: (1) e is an error, (2) e is valid but there are valid references that alter it, (3) e is uniquely valid. In case (3) or638 Figure 3: The prevalence of changes in system outputs and in the NUCLE reference. The top figure presents the number of sentences (heat) for each amount of word changes (x-axis; measured by WORDCHANGE) done by the outputs and the reference (y-axis). The middle figure presents the percentage of sentence pairs (y-axis) where the Spearman ρ values do not exceed a certain threshold (x-axis). The bottom figure presents the counts of source sentences (y-axis) concatenated (right bars) or split (left bars) by the references (striped column) and the outputs (coloured columns). See Appendix A for a legend of the systems. Under all measures, the gold standard references make substantially more changes to the source sentences than any of the systems, in some cases an order of magnitude more. Lval empty Lval not empty e valid e error Small M 0 PY (e, Lval) PY (Lval) Large M 0 0 1 Correction Rate = ↓ ↑ Table 3: The expected effect of oracle re-ranking on undercorrection. Values represent the probability of altering a substring of the input e, which is a proxy to the expected correction rate. Lval is the valid alterations in the k-best list. PY (Lval) is the probability that a valid correction from the list is also in the reference set Y , PY (e, Lval) is the probability that, in addition, the reference that keeps e is not in Y . When M increases, the expected correction rate is expected to increase only if e is an error and a valid correction of it is found in the k-best list. Figure 4: The amount of sentences (y-axis) with a given number of words changed (x-axis) following oracle reranking with different M values (column colors), where the amount for M = 1 is subtracted from them. All references are randomly sampled except the “all” column that contains all ten references. In conclusion, tuning against additional references indeed reduces under-correction. acle re-ranking has no effect and can be ignored. The corrections in the k-best list can then be partitioned to those that keep e as it is; those that invalidly alter e; and those that validly alter e. Table 3 presents the probability that e will be altered in the different cases. Analysis shows that under-correction is likely to decrease with M only in the case where e is an error and the k-best list contains a valid correction of it. Whenever the reference allows both keeping e and altering e, the re-ranker selects keeping e. Indeed, our experimental results show that word changes increase with M (Figure 4), indicating that low coverage may play a role in the observed tendency of GEC systems to under-correct. No significant difference is found for word order. 
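For concreteness, here is a rough sketch of the word-level divergence measures of Section 3.2: source-to-output alignment cast as a weighted bipartite matching, the WORDCHANGE count, and Spearman's ρ over aligned positions. The token cost uses a difflib-based proxy for character edit distance, and surplus tokens are simply counted as unaligned; both are simplifications of the procedure described in the paper.

```python
# Rough sketch of the word-level divergence measures of Section 3.2:
# alignment as weighted bipartite matching, WORDCHANGE, and Spearman's rho
# over aligned positions. The token cost is a difflib-based proxy for edit
# distance; surplus tokens are simply counted as unaligned (simplifications).
import difflib
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import spearmanr

def align(source, correction):
    cost = np.array([[1.0 - difflib.SequenceMatcher(None, s, c).ratio()
                      for c in correction] for s in source])
    rows, cols = linear_sum_assignment(cost)      # min-cost word matching
    return list(zip(rows.tolist(), cols.tolist()))

def word_change(source, correction):
    pairs = align(source, correction)
    changed = sum(1 for i, j in pairs if source[i] != correction[j])
    return changed + abs(len(source) - len(correction))   # plus unaligned words

def order_rho(source, correction):
    src_idx, out_idx = zip(*align(source, correction))
    rho, _ = spearmanr(src_idx, out_idx)
    return rho

src = "This is especially to people who are overseas".split()
out = "This is especially true for people who are overseas".split()
print(word_change(src, out), order_rho(src, out))
```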
639 3.4 Under-correction by Error Types In this section we study the prevalence of undercorrection according to edit types, finding that open-class types of errors (such as replacing a word with another word) are more starkly undercorrected, than closed-class errors. Evaluating with low coverage RBMs does not incentivize systems to address open-class errors (in fact, it disincentivizes them to). Therefore, even if LCB is not the cause for this trend, current evaluation procedures may perpetuate it. We use the data of Bryant et al. (2017), which automatically assigned types to each edit in the output of all CoNLL 2014 systems on the NUCLE test set. As a measure of under-correction tendency, we take the ratio between the mean number of corrections produced by the systems and by the references. We note that this analysis does not consider whether the predicted correction is valid or not, but only how many of the errors of each type the systems attempted to correct. We find that all edit types are under-predicted on average, but that the least under-predicted ones are mostly closed-class types. Concretely, the top quarter of error types consists of orthographical errors, plurality inflection of nouns, adjective inflections to superlative or comparative forms and determiner selection. The bottom quarter includes the categories verb selection, noun selection, particle/preposition selection, pronoun selection, and the type OTHER, which is a residual category. The only exception to this regularity is the closedclass punctuation selection type, which is found in the lower quarter. See Appendix E. This trend cannot be explained by assuming that common error types are targeted more. Indeed, error type frequency is slightly negatively correlated with the under-correction ratio (ρ=-0.29 pvalue=0.16). A more probable account of this effect is the disincentive of GEC systems to correct open-class error types, for which even valid corrections are unlikely to be rewarded. 4 Similar Effects on Simplification We now turn to replicating our experiments on Text Simplification (TS). From a formal point of view, evaluation of the tasks is similar: the output is obtained by making zero or more edits to the source. RBMs are the standard for TS evaluation, much like they are in GEC. Our experiments on TS demonstrate that similar trends recur in this setting as well. The tendency of TS systems to under-predict changes to the source has already been observed by previous work (Alva-Manchego et al., 2017), showing that TS systems under-predict word additions, deletions, substitutions, and sequence shifts (Zhang and Lapata, 2017), and have low edit distance from the source (Narayan and Gardent, 2016). Our experiments show that LCB may account for this under-prediction. Concretely, we show that (1) the distribution of valid references for a given sentence is long-tailed; (2) common evaluation measures suffer from LCB, taking SARI (Xu et al., 2016) as an example RBM (similar trends are obtained with Accuracy); (3) under-prediction is alleviated with M in oracle re-ranking experiments. We crowd-sourced 2500 reference simplifications for 47 sentences, using the corpus and the annotation protocol of Xu et al. (2016), and applying UNSEENEST to estimate Dx (Appendix B). Table 4 shows that the expected number of references is even greater in this setting. 
Assessing the effect of M on SARI, we find that SARI diverges from Accuracy and F-score in that its multi-reference version is not a maximum over the single-reference scores, but some combination of them. This can potentially increase coverage, but it also leads to an unintuitive situation: an output identical to a reference does not receive a perfect score, but rather the score depends on how similar the output is to the other references. A more in-depth analysis of SARI’s handling of multiple references is found in Appendix F. In order to neutralize this effect of SARI, we also report results with MAX-SARI, which coincides with SARI on M = 1, and is defined as the maximum single-reference SARI score for M > 1. Figure 1c presents the coverage of SARI and MAX-SARI of a perfect TS system that selects a random correction from the estimated distribution of corrections using the same bootstrapping protocol as in §2.1. We also include the SARI score of a “lucky perfect” system, that randomly selects one of the given references (the MAX-SARI score for such a system is 1). Results show that SARI has a coverage of about 0.45, and that this score is largely independent of M. The score of predicting one of the available references drops with the number of references, indicating that SARI scores may not be comparable across different M values. We therefore restrict oracle re-ranking experi640 Frequency Threshold (γ) 0 0.001 0.01 0.1 Variants 2636.29 111.19 4.68 0.13 Mass 1 0.42 0.14 0.02 Table 4: Estimating the distribution of simplifications Dx. The table presents the mean number of simplifications per sentence with probability more than γ (top row), as well as their total probability mass (bottom row). ments to MAX-SARI, conducting re-ranking experiments on k-best lists in two settings: Moses (Koehn et al., 2007) with k = 100, and a neural model (Nisioi et al., 2017) with k = 12. Our results indeed show that under-prediction is alleviated with M in both settings. For example, the least under-predicting model (the neural one) did not change 50 sentences with M = 1, but only 29 weren’t changed with M = 8. See Appendix G. 5 Conclusion We argue that using low-coverage reference sets has adverse effects on the reliability of referencebased evaluation, with GEC and TS as a test case, and consequently on the incentives offered to systems. We further argue that these effects cannot be overcome by re-scaling or increasing the number of references in a feasible way. The paper makes two methodological contributions to the monolingual translation evaluation literature: (1) a methodology for evaluating evaluation measures by the scores they assign a perfect system, using a bootstrapping procedure; (2) a methodology for assessing the distribution of valid monolingual translations. Our findings demonstrate how these tools can help characterize the biases of existing systems and evaluation measures. We believe our findings and methodologies can be useful for similar tasks such as style conversion and automatic post-editing of raw MT outputs. We note that the LCB further jeopardizes the reliability of common validation experiments for RBMs, that assess the correlation between human and measure rankings of system outputs (Grundkiewicz et al., 2015). Indeed, if outputs all similarly under-correct, correlation studies will not be affected by whether an RBM is sensitive to undercorrection. Therefore, the tendency of RBMs to reward under-correction cannot be detected by such correlation experiments (cf. 
Choshen and Abend, 2018a). Our results underscore the importance of developing alternative evaluation measures that transcend n-gram overlap, and use deeper analysis tools, e.g., by comparing the semantics of the reference and the source to the output (cf. Lo and Wu, 2011). Napoles et al. (2016) have made progress towards this goal in proposing a reference-less grammaticality measure, using Grammatical Error Detection tools, as did Asano et al. (2017), who added a fluency measure to the grammaticality. In a recent project (Choshen and Abend, 2018b), we proposed a complementary measure that measures the semantic faithfulness of the output to the source, in order to form a combined semantic measure that bypasses the pitfalls of low coverage. Acknowledgments This work was supported by the Israel Science Foundation (grant No. 929/17), and by the HUJI Cyber Security Research Center in conjunction with the Israel National Cyber Bureau in the Prime Minister’s Office. We thank Nathan Schneider, Courtney Napoles and Joel Tetreault for helpful feedback. References Fernando Alva-Manchego, Joachim Bingel, Gustavo Paetzold, Carolina Scarton, and Lucia Specia. 2017. Learning how to simplify from explicit labeling of complex-simplified text pairs. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 295–305, Taipei, Taiwan. Asian Federation of Natural Language Processing. Hiroki Asano, Tomoya Mizumoto, and Kentaro Inui. 2017. Reference-based metrics can be replaced with reference-less metrics in evaluating grammatical error correction systems. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 343–348. Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 793–805, Vancouver, Canada. Association for Computational Linguistics. Christopher Bryant and Hwee Tou Ng. 2015. How far are we from fully automatic high quality grammatical error correction? In ACL (1), pages 697–707. Martin Chodorow, Markus Dickinson, Ross Israel, and Joel R Tetreault. 2012. Problems in evaluating grammatical error detection systems. In COLING, pages 611–628. Citeseer. 641 Leshem Choshen and Omri Abend. 2018a. Automatic metric validation for grammatical error correction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Leshem Choshen and Omri Abend. 2018b. Referenceless measure of faithfulness for grammatical error correction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 568–572. Association for Computational Linguistics. Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner english: The nus corpus of learner english. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications, pages 22–31. Markus Dreyer and Daniel Marcu. 2012. 
Hyter: Meaning-equivalent semantics for translation evaluation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 162–171. Association for Computational Linguistics. Bradley Efron. 1987. Better bootstrap confidence intervals. Journal of the American statistical Association, 82(397):171–185. Mariano Felice and Ted Briscoe. 2015. Towards a standard evaluation method for grammatical error detection and correction. In HLT-NAACL, pages 578– 587. Roman Grundkiewicz, Marcin Junczys-Dowmunt, Edward Gillian, et al. 2015. Human evaluation of grammatical error correction systems. In EMNLP, pages 461–470. Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2016. Phrase-based machine translation is state-ofthe-art for automatic grammatical error correction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1546–1556, Austin, Texas. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180. Chi-kiu Lo and Dekai Wu. 2011. Meant: an inexpensive, high-accuracy, semi-automatic metric for evaluating translation utility via semantic frames. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 220–229. Association for Computational Linguistics. Nitin Madnani, Joel Tetreault, Martin Chodorow, and Alla Rozovskaya. 2011. They can help: Using crowdsourcing to improve the evaluation of grammatical error detection systems. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 508–513. Association for Computational Linguistics. Inderjeet Mani. 2009. Summarization evaluation: an overview. In Proceedings of the NTCIR Workshop, volume 2. Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. 2015. Ground truth for grammatical error correction metrics. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 2, pages 588–593. Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2016. There’s no comparison: Referenceless evaluation metrics in grammatical error correction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2109–2115. Association for Computational Linguistics. Shashi Narayan and Claire Gardent. 2016. Unsupervised sentence simplification using deep semantics. In Proceedings of the 9th International Natural Language Generation conference, pages 111–120, Edinburgh, UK. Association for Computational Linguistics. Sergiu Nisioi, Sanja Štajner, Simone Paolo Ponzetto, and Liviu P Dinu. 2017. Exploring neural text simplification models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 85–91. Matt Post, Chris Callison-Burch, and Miles Osborne. 2012. 
Constructing parallel corpora for six indian languages via crowdsourcing. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 401–409. Association for Computational Linguistics. Alla Rozovskaya and Dan Roth. 2010. Annotating esl errors: Challenges and rewards. In Proceedings of the NAACL HLT 2010 fifth workshop on innovative use of NLP for building educational applications, 642 pages 28–36. Association for Computational Linguistics. Alla Rozovskaya and Dan Roth. 2013. Joint learning and inference for grammatical error correction. Urbana, 51:61801. Alla Rozovskaya and Dan Roth. 2016. Grammatical error correction: Machine translation and classifiers. In Proc. of ACL, pages 2205–2215. Keisuke Sakaguchi, Courtney Napoles, Matt Post, and Joel Tetreault. 2016. Reassessing the goals of grammatical error correction: Fluency instead of grammaticality. Transactions of the Association for Computational Linguistics, 4:169–182. Joel R Tetreault and Martin Chodorow. 2008. Native judgments of non-native usage: Experiments in preposition error detection. In Proceedings of the Workshop on Human Judgements in Computational Linguistics, pages 24–32. Association for Computational Linguistics. Ziang Xie, Anand Avati, Naveen Arivazhagan, Dan Jurafsky, and Andrew Y Ng. 2016. Neural language correction with character-based attention. arXiv preprint arXiv:1603.09727. Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. Transactions of the Association for Computational Linguistics, 3:283–297. Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401–415. Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading esol texts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 180–189. Association for Computational Linguistics. Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th conference on Computational linguistics-Volume 2, pages 947–953. Association for Computational Linguistics. Omar F Zaidan and Chris Callison-Burch. 2011. Crowdsourcing translation: Professional quality from non-professionals. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language TechnologiesVolume 1, pages 1220–1229. Association for Computational Linguistics. Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 584–594, Copenhagen, Denmark. Association for Computational Linguistics. James Zou, Gregory Valiant, Paul Valiant, Konrad Karczewski, Siu On Chan, Kaitlin Samocha, Monkol Lek, Shamil Sunyaev, Mark Daly, and Daniel G MacArthur. 2016. Quantifying unobserved proteincoding variants in human populations provides a roadmap for large-scale sequencing projects. Nature Communications, 7.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 56–65 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 56 Triangular Architecture for Rare Language Translation Shuo Ren1,2∗, Wenhu Chen3, Shujie Liu4, Mu Li4, Ming Zhou4 and Shuai Ma1,2 1SKLSDE Lab, Beihang University, Beijing, China 2Beijing Advanced Innovation Center for Big Data and Brain Computing, Beijing, China 3University of California, Santa Barbara, CA, USA 4Microsoft Research in Asia, Beijing, China Abstract Neural Machine Translation (NMT) performs poor on the low-resource language pair (X, Z), especially when Z is a rare language. By introducing another rich language Y , we propose a novel triangular training architecture (TA-NMT) to leverage bilingual data (Y, Z) (may be small) and (X, Y ) (can be rich) to improve the translation performance of lowresource pairs. In this triangular architecture, Z is taken as the intermediate latent variable, and translation models of Z are jointly optimized with a unified bidirectional EM algorithm under the goal of maximizing the translation likelihood of (X, Y ). Empirical results demonstrate that our method significantly improves the translation quality of rare languages on MultiUN and IWSLT2012 datasets, and achieves even better performance combining back-translation methods. 1 Introduction In recent years, Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has achieved remarkable performance on many translation tasks (Jean et al., 2015; Sennrich et al., 2016; Wu et al., 2016; Sennrich et al., 2017). Being an end-to-end architecture, an NMT system first encodes the input sentence into a sequence of real vectors, based on which the decoder generates the target sequence word by word with the attention mechanism (Bahdanau et al., 2014; Luong et al., 2015). During training, NMT systems are optimized to maximize the translation probability of a given language pair ∗Contribution during internship at MSRA. with the Maximum Likelihood Estimation (MLE) method, which requires large bilingual data to fit the large parameter space. Without adequate data, which is common especially when it comes to a rare language, NMT usually falls short on low-resource language pairs (Zoph et al., 2016). In order to deal with the data sparsity problem for NMT, exploiting monolingual data (Sennrich et al., 2015; Zhang and Zong, 2016; Cheng et al., 2016; Zhang et al., 2018; He et al., 2016) is the most common method. With monolingual data, the back-translation method (Sennrich et al., 2015) generates pseudo bilingual sentences with a targetto-source translation model to train the source-totarget one. By extending back-translation, sourceto-target and target-to-source translation models can be jointly trained and boost each other (Cheng et al., 2016; Zhang et al., 2018). Similar to joint training (Cheng et al., 2016; Zhang et al., 2018), dual learning (He et al., 2016) designs a reinforcement learning framework to better capitalize on monolingual data and jointly train two models. Instead of leveraging monolingual data (X or Z) to enrich the low-resource bilingual pair (X, Z), in this paper, we are motivated to introduce another rich language Y , by which additionally acquired bilingual data (Y, Z) and (X, Y ) can be exploited to improve the translation performance of (X, Z). 
This requirement is easy to satisfy, especially when Z is a rare language but X is not. Under this scenario, (X, Y ) can be a rich-resource pair and provide much bilingual data, while (Y, Z) would also be a low-resource pair mostly because Z is rare. For example, in the dataset IWSLT2012, there are only 112.6K bilingual sentence pairs of English-Hebrew, since Hebrew is a rare language. If French is introduced as the third language, we can have another lowresource bilingual data of French-Hebrew (116.3K sentence pairs), and easily-acquired bilingual data 57 of the rich-resource pair English-French. Figure 1: Triangular architecture for rare language translation. The solid lines mean rich-resource and the dash lines mean low-resource. X, Y and Z are three different languages. With the introduced rich language Y , in this paper, we propose a novel triangular architecture (TA-NMT) to exploit the additional bilingual data of (Y, Z) and (X, Y ), in order to get better translation performance on the low-resource pair (X, Z), as shown in Figure 1. In this architecture, (Y, Z) is used for training another translation model to score the translation model of (X, Z), while (X, Y ) is used to provide large bilingual data with favorable alignment information. Under the motivation to exploit the richresource pair (X, Y ), instead of modeling X ⇒ Z directly, our method starts from modeling the translation task X ⇒Y while taking Z as a latent variable. Then, we decompose X ⇒Y into two phases for training two translation models of low-resource pairs ((X, Z) and (Y, Z)) respectively. The first translation model generates a sequence in the hidden space of Z from X, based on which the second one generates the translation in Y . These two models can be optimized jointly with an Expectation Maximization (EM) framework with the goal of maximizing the translation probability p(y|x). In this framework, the two models can boost each other by generating pseudo bilingual data for model training with the weights scored from the other. By reversing the translation direction of X ⇒Y , our method can be used to train another two translation models p(z|y) and p(x|z). Therefore, the four translation models (p(z|x), p(x|z), p(z|y) and p(y|z)) of the rare language Z can be optimized jointly with our proposed unified bidirectional EM algorithm. Experimental results on the MultiUN and IWSLT2012 datasets demonstrate that our method can achieve significant improvements for rare languages translation. By incorporating backtranslation (a method leveraging more monolingual data) into our method, TA-NMT can achieve even further improvements. Our contributions are listed as follows: • We propose a novel triangular training architecture (TA-NMT) to effectively tackle the data sparsity problem for rare languages in NMT with an EM framework. • Our method can exploit two additional bilingual datasets at both the model and data levels by introducing another rich language. • Our method is a unified bidirectional EM algorithm, in which four translation models on two low-resource pairs are trained jointly and boost each other. 2 Method As shown in Figure 1, our method tries to leverage (X, Y ) (a rich-resource pair) and (Y, Z) to improve the translation performance of low-resource pair (X, Z), during which translation models of (X, Z) and (Y, Z) can be improved jointly. 
Instead of directly modeling the translation probabilities of low-resource pairs, we model the rich-resource pair translation X ⇒Y , with the language Z acting as a bridge to connect X and Y . We decompose X ⇒Y into two phases for training two translation models. The first model p(z|x) generates the latent translation in Z from the input sentence in X, based on which the second one p(y|z) generate the final translation in language Y . Following the standard EM procedure (Borman, 2004) and Jensen’s inequality, we derive the lower bound of p(y|x) over the whole training data D as follows: L(Θ; D) = X (x,y)∈D log p(y|x) = X (x,y)∈D log X z p(z|x)p(y|z) = X (x,y)∈D log X z Q(z)p(z|x)p(y|z) Q(z) ≥ X (x,y)∈D X z Q(z) log p(z|x)p(y|z) Q(z) .= L(Q) (1) where Θ is the model parameters set of p(z|x) and p(y|z), and Q(z) is an arbitrary posterior distribution of z. We denote the lower-bound in the last 58 but one line as L(Q). Note that we use an approximation that p(y|x, z) ≈p(y|z) due to the semantic equivalence of parallel sentences x and y. In the following subsections, we will first propose our EM method in subsection 2.1 based on the lower-bound derived above. Next, we will extend our method to two directions and give our unified bidirectional EM training in subsection 2.2. Then, in subsection 2.3, we will discuss more training details of our method and present our algorithm in the form of pseudo codes. 2.1 EM Training To maximize L(Θ; D), the EM algorithm can be leveraged to maximize its lower bound L(Q). In the E-step, we calculate the expectation of the variable z using current estimate for the model, namely find the posterior distribution Q(z). In the M-step, with the expectation Q(z), we maximize the lower bound L(Q). Note that conditioned on the observed data and current model, the calculation of Q(z) is intractable, so we choose Q(z) = p(z|x) approximately. M-step: In the M-step, we maximize the lower bound L(Q) w.r.t model parameters given Q(z). By substituting Q(z) = p(z|x) into L(Q), we can get the M-step as follows: Θy|z = arg max Θy|z L(Q) = arg max Θy|z X (x,y)∈D X z p(z|x) log p(y|z) = arg max Θy|z X (x,y)∈D Ez∼p(z|x) log p(y|z) (2) E-step: The approximate choice of Q(z) brings in a gap between L(Q) and L(Θ; D), which can be minimized in the E-step with Generalized EM method (McLachlan and Krishnan, 2007). According to Bishop (2006), we can write this gap explicitly as follows: L(Θ; D) −L(Q) = X z Q(z) log Q(z) p(z|y) = KL(Q(z)||p(z|y)) = KL(p(z|x)||p(z|y)) (3) where KL(·) is the KullbackLeibler divergence, and the approximation that p(z|x, y) ≈p(z|y) is also used above. In the E-step, we minimize the gap between L(Q) and L(Θ; D) as follows: Θz|x = arg min Θz|x KL(p(z|x)||p(z|y)) (4) To sum it up, the E-step optimizes the model p(z|x) by minimizing the gap between L(Q) and L(Θ; D) to get a better lower bound L(Q). This lower bound is then maximized in the M-step to optimize the model p(y|z). Given the new model p(y|z), the E-step tries to optimize p(z|x) again to find a new lower bound, with which the M-step is re-performed. This iteration process continues until the models converge, which is guaranteed by the convergence of the EM algorithm. 2.2 Unified Bidirectional Training The model p(z|y) is used as an approximation of p(z|x, y) in the E-step optimization (Equation 3). Due to the low resource property of the language pair (Y, Z), p(z|y) cannot be well trained. 
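For reference, the single-direction objective and updates of Equations (1)–(4) can be summarized compactly (with Q(z) = p(z|x) and the approximation p(y|x, z) ≈ p(y|z)):

```latex
\begin{align*}
\mathcal{L}(\Theta;\mathcal{D})
  &= \sum_{(x,y)\in\mathcal{D}} \log p(y\mid x)
   = \sum_{(x,y)\in\mathcal{D}} \log \sum_{z} p(z\mid x)\, p(y\mid z) \\
  &\ge \sum_{(x,y)\in\mathcal{D}} \sum_{z} Q(z)\,
       \log \frac{p(z\mid x)\, p(y\mid z)}{Q(z)}
   \;\triangleq\; \mathcal{L}(Q) && \text{(lower bound, Eq.~1)} \\
\Theta_{y\mid z}
  &= \arg\max_{\Theta_{y\mid z}} \sum_{(x,y)\in\mathcal{D}}
     \mathbb{E}_{z\sim p(z\mid x)}\bigl[\log p(y\mid z)\bigr] && \text{(M-step, Eq.~2)} \\
\Theta_{z\mid x}
  &= \arg\min_{\Theta_{z\mid x}}
     \mathrm{KL}\bigl(p(z\mid x)\,\big\|\,p(z\mid y)\bigr) && \text{(E-step, Eq.~4)}
\end{align*}
```

Because the E-step leans on p(z|y) as the stand-in for p(z|x, y), a weak p(z|y) degrades the whole procedure.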
To solve this problem, we can jointly optimize p(x|z) and p(z|y) similarly by maximizing the reverse translation probability p(x|y). We now give our unified bidirectional generalized EM procedures as follows: • Direction of X ⇒Y E: Optimize Θz|x. arg min Θz|x KL(p(z|x)||p(z|y)) (5) M: Optimize Θy|z. arg max Θy|z X (x,y)∈D Ez∼p(z|x) log p(y|z) (6) • Direction of Y ⇒X E: Optimize Θz|y. arg min Θz|y KL(p(z|y)||p(z|x)) (7) M: Optimize Θx|z. arg max Θx|z X (x,y)∈D Ez∼p(z|y) log p(x|z) (8) Based on the above derivation, the whole architecture of our method can be illustrated in Figure 2, where the dash arrows denote the direction of p(y|x), in which p(z|x) and p(y|z) are trained jointly with the help of p(z|y), while the solid ones denote the direction of p(x|y), in which p(z|y) and p(x|z) are trained jointly with the help of p(z|x). 59 Figure 2: Triangular Learning Architecture for Low-Resource NMT 2.3 Training Details A major difficulty in our unified bidirectional training is the exponential search space of the translation candidates, which could be addressed by either sampling (Shen et al., 2015; Cheng et al., 2016) or mode approximation (Kim and Rush, 2016). In our experiments, we leverage the sampling method and simply generate the top target sentence for approximation. In order to perform gradient descend training, the parameter gradients for Equations 5 and 7 are formulated as follows: ∇Θz|xKL(p(z|x)||p(z|y)) = Ez∼p(z|x) log p(z|x) p(z|y)∇Θz|x log p(z|x) ∇Θz|yKL(p(z|y)||p(z|x)) = Ez∼p(z|y) log p(z|y) p(z|x)∇Θz|y log p(z|y) (9) Similar to reinforcement learning, models p(z|x) and p(z|y) are trained using samples generated by the models themselves. According to our observation, some samples are noisy and detrimental to the training process. One way to tackle this is to filter out the bad ones using some additional metrics (BLEU, etc.). Nevertheless, in our settings, BLEU scores cannot be calculated during training due to the absence of the golden targets (z is generated based on x or y from the richresource pair (x, y)). Therefore we choose IBM model1 scores to weight the generated translation candidates, with the word translation probabilities calculated based on the given bilingual data (the low-resource pair (x, z) or (y, z)). Additionally, to stabilize the training process, the pseudo samples generated by model p(z|x) or p(z|y) are mixed with true bilingual samples in the same mini-batch with the ratio of 1-1. The whole training procedure is described in the following Algorithm 1, where the 5th and 9th steps are generating pseudo data. Algorithm 1 Training low-resource translation models with the triangular architecture Input: Rich-resource bilingual data (x, y); lowresource bilingual data (x, z) and (y, z) Output: Parameters Θz|x, Θy|z, Θz|y and Θx|z 1: Pre-train p(z|x), p(z|y), p(x|z), p(y|z) 2: while not convergence do 3: Sample (x, y), (x∗, z∗), (y∗, z∗) ∈D 4: ▷X ⇒Y : Optimize Θz|x and Θy|z 5: Generate z′ from p(z′|x) and build the training batches B1 = (x, z′)∪(x∗, z∗), B2 = (y, z′) ∪(y∗, z∗) 6: E-step: update Θz|x with B1 (Equation 5) 7: M-step: update Θy|z with B2 (Equation 6) 8: ▷Y ⇒X: Optimize Θz|y and Θx|z 9: Generate z′ from p(z′|y) and build the training batches B3 = (y, z′)∪(y∗, z∗), B4 = (x, z′) ∪(x∗, z∗) 10: E-step: update Θz|y with B3 (Equation 7) 11: M-step: update Θx|z with B4 (Equation 8) 12: end while 13: return Θz|x, Θy|z, Θz|y and Θx|z 3 Experiments 3.1 Datasets In order to verify our method, we conduct experiments on two multilingual datasets. 
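The sampled gradients in Equation 9 are score-function (REINFORCE-style) estimates. The toy sketch below — plain NumPy, with small categorical distributions standing in for the sequence models, and not the paper's implementation — checks the sampled estimate of the gradient of KL(p(z|x) || p(z|y)) against the exact gradient; all variable names here are illustrative.

```python
# Toy check of the score-function gradient estimate in Equation 9:
# grad KL(p||q) = E_{z~p}[ log(p(z)/q(z)) * grad log p(z) ], with p = p(z|x), q = p(z|y).
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=5)                 # logits of p(z|x), the model being updated
q = np.array([0.4, 0.3, 0.1, 0.1, 0.1])    # p(z|y), treated as fixed in this E-step

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

p = softmax(theta)
log_ratio = np.log(p) - np.log(q)          # log p(z|x) - log p(z|y)

# Exact gradient of KL(p || q) with respect to the logits theta.
exact_grad = p * (log_ratio - np.dot(p, log_ratio))

# Monte-Carlo estimate from samples z ~ p(z|x), as in Equation 9.
n = 100_000
zs = rng.choice(len(p), size=n, p=p)
grad_log_p = -np.tile(p, (n, 1))           # d log p(z) / d theta = onehot(z) - p
grad_log_p[np.arange(n), zs] += 1.0
mc_grad = (log_ratio[zs, None] * grad_log_p).mean(axis=0)

print(np.round(exact_grad, 4))
print(np.round(mc_grad, 4))                # close to exact_grad for large n
```

Returning to the experimental setup, the two datasets are as follows.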
The one is MultiUN (Eisele and Chen, 2010), which is a collection of translated documents from the United Nations, and the other is IWSLT2012 (Cettolo et al., 2012), which is a set of multilingual transcriptions of TED talks. As is mentioned in section 1, our method is compatible with methods exploiting monolingual data. So we also find some extra monolingual data of rare languages in both datasets and conduct experiments incorporating back-translation into our method. MultiUN: English-French (EN-FR) bilingual data are used as the rich-resource pair (X, Y ). Arabic (AR) and Spanish (ES) are used as two simulated rare languages Z. We randomly choose subsets of bilingual data of (X, Z) and (Y, Z) in the original dataset to simulate low-resource situations, and make sure there is no overlap in Z between chosen data of (X, Z) and (Y, Z). IWSLT20121: English-French is used as the rich-resource pair (X, Y ), and two rare languages Z are Hebrew (HE) and Romanian (RO) in our 1https://wit3.fbk.eu/mt.php?release=2012-02-plain 60 Pair MultiUN IWSLT2012 Lang Size Lang Size (X, Y ) EN-FR 9.9 M EN-FR 3 7.9 M (X, Z) EN-AR 116 K EN-HE 112.6 K (Y, Z) FR-AR 116 K FR-HE 116.3 K mono Z AR 3 M HE 512.5 K (X, Z) EN-ES 116 K EN-RO 4 467.3 K (Y, Z) FR-ES 116 K FR-RO 111.6 K mono Z ES 3 M RO 885.0 K Table 1: training data size of each language pair. choice. Note that in this dataset, low-resource pairs (X, Z) and (Y, Z) are severely overlapped in Z. In addition, English-French bilingual data from WMT2014 dataset are also used to enrich the rich-resource pair. We also use additional EnglishRomanian bilingual data from Europarlv7 dataset (Koehn, 2005). The monolingual data of Z (HE and RO) are taken from the web2. In both datasets, all sentences are filtered within the length of 5 to 50 after tokenization. Both the validation and the test sets are 2,000 parallel sentences sampled from the bilingual data, with the left as training data. The size of training data of all language pairs are shown in Table 1. 3.2 Baselines We compare our method with four baseline systems. The first baseline is the RNNSearch model (Bahdanau et al., 2014), which is a sequence-tosequence model with attention mechanism trained with given small-scale bilingual data. The trained translation models are also used as pre-trained models for our subsequent training processes. The second baseline is PBSMT (Koehn et al., 2003), which is a phrase-based statistical machine translation system. PBSMT is known to perform well on low-resource language pairs, so we want to compare it with our proposed method. And we use the public available implementation of Moses5 for training and test in our experiments. The third baseline is a teacher-student alike method (Chen et al., 2017). For the sake of brevity, we will denote it as T-S. The process is illustrated in Figure 3. We treat this method as a second baseline because it can also be regarded as a method exploiting (Y, Z) and (X, Y ) to improve 2https://github.com/ajinkyakulkarni14/TEDMultilingual-Parallel-Corpus 3together with WMT2014 4together with Europarlv7 5http://www.statmt.org/moses/ Method Resources PBSMT (X, Z), (Y, Z) RNNSearch (X, Z), (Y, Z) T-S (X, Z), (Y, Z), (X, Y ) BackTrans (X, Z), (Y, Z), (X, Y ), mono Z TA-NMT (X, Z), (Y, Z), (X, Y ) TA-NMT(GI) (X, Z), (Y, Z), (X, Y ), mono Z Table 2: Resources that different methods use the translation of (X, Z) if we regard (X, Z) as the zero-resource pair and p(x|y) as the teacher model when training p(z|x) and p(x|z). 
The fourth baseline is back-translation (Sennrich et al., 2015). We will denote it as BackTrans. More concretely, to train the model p(z|x), we use extra monolingual Z described in Table 1 to do back-translation; to train the model p(x|z), we use monolingual X taken from (X, Y ). Procedures for training p(z|y) and p(y|z) are similar. This method use extra monolingual data of Z compared with our TA-NMT method. But we can incorporate it into our method. Figure 3: A teacher-student alike method for low-resource translation. For training p(z|x) and p(x|z), we mix the true pair (y∗, z∗) ∈D with the pseudo pair (x′, z∗) generated by teacher model p (x′|y∗) in the same mini-batch. The training procedure of p(z|y) and p(y|z) is similar. 3.3 Overall Results Experimental results on both datasets are shown in Table 3 and 4 respectively, in which RNNSearch, PBSMT, T-S and BackTrans are four baselines. TA-NMT is our proposed method, and TA-NMT(GI) is our method incorporating backtranslation as good initialization. For the purpose of clarity and a fair comparison, we list the resources that different methods exploit in Table 2. From Table 3 on MultiUN, the performance of RNNSearch is relatively poor. As is expected, PBSMT performs better than RNNSearch on lowresource pairs by the average of 1.78 BLEU. The T-S method which can doubling the training data 61 Method EN2AR AR2EN FR2AR AR2FR Ave EN2ES ES2EN FR2ES ES2FR Ave (X⇒Z) (Z⇒X) (Y⇒Z) (Z⇒Y) (X⇒Z) (Z⇒X) (Y⇒Z) (Z⇒Y) RNNSearch 18.03 31.40 13.42 22.04 21.22 38.77 36.51 32.92 33.05 35.31 PBSMT 19.44 30.81 15.27 23.65 22.29 38.47 36.64 34.99 33.98 36.02 T-S 19.02 32.47 14.59 23.53 22.40 39.75 38.02 33.67 34.04 36.57 BackTrans 22.19 32.02 15.85 23.57 23.73 42.27 38.42 35.81 34.25 37.76 TA-NMT 20.59 33.22 14.64 24.45 23.23 40.85 39.06 34.52 34.39 37.21 TA-NMT(GI) 23.16 33.64 16.50 25.07 24.59 42.63 39.53 35.87 35.21 38.31 Table 3: Test BLEU on MultiUN Dataset. Method EN2HE HE2EN FR2HE HE2FR Ave EN2RO RO2EN FR2RO RO2FR Ave (X⇒Z) (Z⇒X) (Y⇒Z) (Z⇒Y) (X⇒Z) (Z⇒X) (Y⇒Z) (Z⇒Y) RNNSearch 17.94 28.32 11.86 21.67 19.95 31.44 40.63 17.34 25.20 28.65 PBSMT 17.39 28.05 12.77 21.87 20.02 31.51 39.98 18.13 25.47 28.77 T-S 17.97 28.42 12.04 21.99 20.11 31.80 40.86 17.94 25.69 29.07 BackTrans 18.69 28.55 12.31 21.63 20.20 32.18 41.03 18.19 25.30 29.18 TA-NMT 19.19 29.28 12.76 22.62 20.96 33.65 41.93 18.53 26.35 30.12 TA-NMT(GI) 19.90 29.94 13.54 23.25 21.66 34.41 42.61 19.30 26.53 30.71 Table 4: Test BLEU on IWSLT Dataset. for both (X, Z) and (Y, Z) by generating pseudo data from each other, leads up to 1.1 BLEU points improvement on average over RNNSearch. Compared with T-S, our method gains a further improvement of about 0.9 BLEU on average, because our method can better leverage the rich-resource pair (X, Y ). With extra large monolingual Z introduced, BackTrans can improve the performance of p(z|x) and p(z|y) significantly compared with all the methods without monolingual Z. However TA-NMT is comparable with or even better than BackTrans for p(x|z) and p(y|z) because both of the methods leverage resources from richresource pair (X, Y ), but BackTrans does not use the alignment information it provides. Moreover, with back-translation as good initialization, further improvement is achieved by TA-NMT(GI) of about 0.7 BLEU on average over BackTrans. In Table 4, we can draw the similar conclusion. However, different from MultiUN, in the EN-FR-HE group of IWSLT, (X, Z) and (Y, Z) are severely overlapped in Z. 
Therefore, T-S cannot improve the performance obviously (only about 0.2 BLEU) on RNNSearch because it fails to essentially double training data via the teacher model. As for EN-FR-RO, with the additionally introduced EN-RO data from Europarlv7, which has no overlap in RO with FR-RO, T-S can improve the average performance more than the ENFR-HE group. TA-NMT outperforms T-S by 0.93 BLEU on average. Note that even though BackTrans uses extra monolingual Z, the improvements are not so obvious as the former dataset, the reason for which we will delve into in the next subsection. Again, with back-translation as good initialization, TA-NMT(GI) can get the best result. Note that BLEU scores of TA-NMT are lower than BackTrans in the directions of X⇒Z and Y⇒Z. The reason is that the resources used by these two methods are different, as shown in Table 2. To do back translation in two directions (e.g., X⇒Z and Z⇒X), we need monolingual data from both sides (e.g., X and Z), however, in TA-NMT, the monolingual data of Z is not necessary. Therefore, in the translation of X⇒Z or Y⇒Z, BackTrans uses additional monolingual data of Z while TA-NMT does not, that is why BackTrans outperforms TA-NMT in these directions. Our method can leverage back translation as a good initialization, aka TA-NMT(GI) , and outperforms BackTrans on all translation directions. The average test BLEU scores of different methods in each data group (EN-FR-AR, EN-FRES, EN-FR-HE, and EN-FR-RO) are listed in the column Ave of the tables for clear comparison. 3.4 The Effect of Extra Monolingual Data Comparing the results of BackTrans and TANMT(GI) on both datasets, we notice the improvements of both methods on IWSLT are not as significant as MultiUN. We speculate the reason is the relatively less amount of monolingual Z we use in 62 the experiments on IWSLT as shown in Table 1. So we conduct the following experiment to verify the conjecture by changing the scale of monolingual Arabic data in the MultiUN dataset, of which the data utilization rates are set to 0%, 10%, 30%, 60% and 100% respectively. Then we compare the performance of BackTrans and TA-NMT(GI) in the EN-FR-AR group. As Figure 4 shows, the amount of monolingual Z actually has a big effect on the results, which can also verify our conjecture above upon the less significant improvement of BackTrans and TA-NMT(GI) on IWSLT. In addition, even with poor ”good-initialization”, TANMT(GI) still get the best results. Figure 4: Test BLEU of the EN-FR-AR group performed by BackTrans and TA-NMT(GI) with different amount of monolingual Arabic data. 3.5 EM Training Curves To better illustrate the behavior of our method, we print the training curves in both the M-steps and Esteps of TA-NMT and TA-NMT(GI) in Figure 5 above. The chosen models printed in this figure are EN2AR and AR2FR on MultiUN, and EN2RO and RO2FR on IWLST. From Figure 5, we can see that the two lowresource translation models are improved nearly simultaneously along with the training process, which verifies our point that two weak models could boost each other in our EM framework. Notice that at the early stage, the performance of all models stagnates for several iterations, especially of TA-NMT. The reason could be that the pseudo bilingual data and the true training data are heterogeneous, and it may take some time for the models to adapt to a new distribution which both models agree. 
Compared with TA-NMT, TA-NMT(GI) are more stable, because the models may have Figure 5: BLEU curves on validation sets during the training processes of TA-NMT and TANMT(GI). (Top: EN2AR (the E-step) and AR2FR (the M-step); Bottom: EN2RO (the E-step) and RO2FR (the M-step)) adapted to a mixed distribution of heterogeneous data in the preceding back-translation phase. 3.6 Reinforcement Learning Mechanism in Our Method As shown in Equation 9, the E-step actually works as a reinforcement learning (RL) mechanism. Models p(z|x) and p(z|y) generate samples by themselves and receive rewards to update their parameters. Note that the reward here is described by the log terms in Equation 9, which is derived from our EM algorithm rather than defined artificially. In Table 5, we do a case study of the EN2ES translation sampled by p(z|x) as well as its time-step rewards during the E-step. In the first case, the best translation of ”political” is ”pol´ıticos”. When the model p(z|x) generates an inaccurate one ”pol´ıticas”, it receives a negative reward (-0.01), with which the model parameters will be updated accordingly. In the sec63 Source in concluding , poverty eradication requires political will and commitment . Output en (0.66) conclusi´on (0.80) , (0.14) la (0.00) erradicaci´on (1.00) de (0.40) la (0.00) pobreza (0.90) requiere (0.10) voluntad (1.00) y (0.46) compromiso (0.90) pol´ıticas (-0.01) . (1.00) Reference en conclusi´on , la erradicaci´on de la pobreza necesita la voluntad y compromiso pol´ıticos . Source visit us and get to know and love berlin ! Output visita (0.00) y (0.05) se (0.00) a (0.17) saber (0.00) y (0.04) a (0.01) berl´ın (0.00) ! (0.00) Reference vis´ıtanos y llegar a saber y amar a berl´ın . Source legislation also provides an important means of recognizing economic , social and cultural rights at the domestic level . Output la (1.00) legislaci´on (0.34) tambin (1.00) constituye (0.60) un (1.00) medio (0.22) importante (0.74) de (0.63) reconocer (0.21) los (0.01) derechos (0.01) econmicos (0.03) , (0.01) sociales (0.02) y (0.01) culturales (1.00) a (0.00) nivel (0.40) nacional (1.00) . (0.03) Reference la legislaci´on tambi´en constituye un medio importante de reconocer los derechos econ´omicos , iales y culturales a nivel nacional . Table 5: English to Spanish translation sampled in the E-step as well as its time-step rewards. ond case, the output misses important words and is not fluent. Rewards received by the model p(z|x) are zero for nearly all tokens in the output, leading to an invalid updating. In the last case, the output sentence is identical to the human reference. The rewards received are nearly all positive and meaningful, thus the RL rule will update the parameters to encourage this translation candidate. 4 Related Work NMT systems, relying heavily on the availability of large bilingual data, result in poor translation quality for low-resource pairs (Zoph et al., 2016). This low-resource phenomenon has been observed in much preceding work. A very common approach is exploiting monolingual data of both source and target languages (Sennrich et al., 2015; Zhang and Zong, 2016; Cheng et al., 2016; Zhang et al., 2018; He et al., 2016). As a kind of data augmentation technique, exploiting monolingual data can enrich the training data for low-resource pairs. Sennrich et al. (2015) propose back-translation, exploits the monolingual data of the target side, which is then used to generate pseudo bilingual data via an additional target-to-source translation model. 
Different from back-translation, Zhang and Zong (2016) propose two approaches to use source-side monolingual data, of which the first is employing a self-learning algorithm to generate pseudo data, while the second is using two NMT models to predict the translation and to reorder the source-side monolingual sentences. As an extension to these two methods, Cheng et al. (2016) and Zhang et al. (2018) combine two translation directions and propose a training framework to jointly optimize the sourceto-target and target-to-source translation models. Similar to joint training, He et al. (2016) propose a dual learning framework with a reinforcement learning mechanism to better leverage monolingual data and make two translation models promote each other. All of these methods are concentrated on exploiting either the monolingual data of the source and target language or both of them. Our method takes a different angle but is compatible with existing approaches, we propose a novel triangular architecture to leverage two additional language pairs by introducing a third rich language. By combining our method with existing approaches such as back-translation, we can make a further improvement. Another approach for tackling the low-resource translation problem is multilingual neural machine translation (Firat et al., 2016), where different encoders and decoders for all languages with a shared attention mechanism are trained. This method tends to exploit the network architecture to relate low-resource pairs. Our method is different from it, which is more like a training method rather than network modification. 5 Conclusion In this paper, we propose a triangular architecture (TA-NMT) to effectively tackle the problem 64 of low-resource pairs translation with a unified bidirectional EM framework. By introducing another rich language, our method can better exploit the additional language pairs to enrich the original low-resource pair. Compared with the RNNSearch (Bahdanau et al., 2014), a teacherstudent alike method (Chen et al., 2017) and the back-translation (Sennrich et al., 2015) on the same data level, our method achieves significant improvement on the MutiUN and IWSLT2012 datasets. Note that our method can be combined with methods exploiting monolingual data for NMT low-resource problem such as backtranslation and make further improvements. In the future, we may extend our architecture to other scenarios, such as totally unsupervised training with no bilingual data for the rare language. Acknowledgments We thank Zhirui Zhang and Shuangzhi Wu for useful discussions. This work is supported in part by NSFC U1636210, 973 Program 2014CB340300, and NSFC 61421003. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Christopher M Bishop. 2006. Pattern recognition and machine learning. springer. Sean Borman. 2004. The expectation maximization algorithm-a short tutorial. Submitted for publication, pages 1–9. Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. Wit3: Web inventory of transcribed and translated talks. In Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT), volume 261, page 268. Yun Chen, Yang Liu, Yong Cheng, and Victor OK Li. 2017. A teacher-student framework for zeroresource neural machine translation. arXiv preprint arXiv:1705.00753. Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. 
Semisupervised learning for neural machine translation. arXiv preprint arXiv:1606.04596. Andreas Eisele and Yu Chen. 2010. Multiun: A multilingual corpus from united nation documents. In Proceedings of the Seventh conference on International Language Resources and Evaluation, pages 2868–2872. European Language Resources Association (ELRA). Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. arXiv preprint arXiv:1601.01073. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tieyan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems, pages 820–828. S´ebastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. Montreal neural machine translation systems for wmt’15. In WMT@ EMNLP, pages 134–140. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP, volume 3, page 413. Yoon Kim and Alexander M Rush. 2016. Sequencelevel knowledge distillation. arXiv preprint arXiv:1606.07947. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, volume 5, pages 79–86. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, pages 48–54. Association for Computational Linguistics. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Geoffrey McLachlan and Thriyambakam Krishnan. 2007. The EM algorithm and extensions, volume 382. John Wiley & Sons. Rico Sennrich, Alexandra Birch, Anna Currey, Ulrich Germann, Barry Haddow, Kenneth Heafield, Antonio Valerio Miceli Barone, and Philip Williams. 2017. The university of edinburgh’s neural mt systems for wmt17. arXiv preprint arXiv:1708.00726. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation systems for wmt 16. arXiv preprint arXiv:1606.02891. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2015. Minimum risk training for neural machine translation. arXiv preprint arXiv:1512.02433. 65 Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In EMNLP, pages 1535–1545. Zhirui Zhang, Shujie Liu, Mu Li, Ming Zhou, and Enhong Chen. 2018. Joint training for neural machine translation models with monolingual data. In AAAI. Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for lowresource neural machine translation. arXiv preprint arXiv:1604.02201.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 643–653 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 643 The price of debiasing automatic metrics in natural language evaluation Arun Tejasvi Chaganty∗and Stephen Mussmann∗and Percy Liang Computer Science Department, Stanford University {chaganty,mussmann,pliang}@cs.stanford.edu Abstract For evaluating generation systems, automatic metrics such as BLEU cost nothing to run but have been shown to correlate poorly with human judgment, leading to systematic bias against certain model improvements. On the other hand, averaging human judgments, the unbiased gold standard, is often too expensive. In this paper, we use control variates to combine automatic metrics with human evaluation to obtain an unbiased estimator with lower cost than human evaluation alone. In practice, however, we obtain only a 7– 13% cost reduction on evaluating summarization and open-response question answering systems. We then prove that our estimator is optimal: there is no unbiased estimator with lower cost. Our theory further highlights the two fundamental bottlenecks—the automatic metric and the prompt shown to human evaluators— both of which need to be improved to obtain greater cost savings. 1 Introduction In recent years, there has been an increasing interest in tasks that require generating natural language, including abstractive summarization (Nallapati et al., 2016), open-response question answering (Nguyen et al., 2016; Koˇcisky et al., 2017), image captioning (Lin et al., 2014), and open-domain dialogue (Lowe et al., 2017b). Unfortunately, the evaluation of these systems remains a thorny issue because of the diversity of possible correct responses. As the gold standard of performing human evaluation is often too expensive, there has been a large effort develop∗Authors contributed equally. ing automatic metrics such as BLEU (Papineni et al., 2002), ROUGE (Lin and Rey, 2004), METEOR (Lavie and Denkowski, 2009; Denkowski and Lavie, 2014) and CiDER (Vedantam et al., 2015). However, these have shown to be biased, correlating poorly with human metrics across different datasets and systems (Liu et al., 2016b; Novikova et al., 2017). Can we combine automatic metrics and human evaluation to obtain an unbiased estimate at lower cost than human evaluation alone? In this paper, we propose a simple estimator based on control variates (Ripley, 2009), where we average differences between human judgments and automatic metrics rather than averaging the human judgments alone. Provided the two are correlated, our estimator will have lower variance and thus reduce cost. We prove that our estimator is optimal in the sense that no unbiased estimator using the same automatic metric can have lower variance. We also analyze its data efficiency (equivalently, cost savings)—the factor reduction in number of human judgments needed to obtain the same accuracy versus naive human evaluation—and show that it depends solely on two factors: (a) the annotator variance (which is a function of the human evaluation prompt) and (b) the correlation between human judgments and the automatic metric. This factorization allows us to calculate typical and best-case data efficiencies and accordingly refine the evaluation prompt or automatic metric. 
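To make the dependence on these two factors concrete, the data efficiency derived in Section 3.2 (Equation 7) can be computed directly from the correlation ρ and the normalized annotator variance γ. The snippet below is a small numerical illustration, not tied to any particular dataset.

```python
# Data efficiency of the control variates estimator (Equation 7, Section 3.2):
# DE = (1 + gamma) / (1 - rho**2 + gamma), where gamma = sigma_a^2 / sigma_f^2.
def data_efficiency(rho, gamma):
    return (1.0 + gamma) / (1.0 - rho ** 2 + gamma)

print(data_efficiency(rho=1.0, gamma=1.0))    # 2.0: capped at (1 + gamma) / gamma even with a perfect metric
print(data_efficiency(rho=0.707, gamma=0.0))  # ~2.0: noiseless judgments still need |rho| ~ 0.707 to halve cost
print(data_efficiency(rho=0.31, gamma=1.0))   # ~1.05: a weak metric plus noisy judgments yields little savings
```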
Finally, we evaluate our estimator on stateof-the-art systems from two tasks, summarization on the CNN/Daily Mail dataset (Hermann et al., 2015; Nallapati et al., 2016) and openresponse question answering on the MS MARCOv1.0 dataset (Nguyen et al., 2016). To study our estimators offline, we preemptively collected 10,000 human judgments which cover several 644 0.5 0.6 0.7 0.8 Human judgement 0.2 0.3 0.4 ROUGE-L fastqa fastqa ext snet snet.ens (a) System-level correlation on the MS MARCO task 0.0 0.5 1.0 Human judgment 0.0 0.2 0.4 0.6 0.8 1.0 ROUGE-L (b) Instance-level correlation for the fastqa system Figure 1: (a) At a system-level, automatic metrics (ROUGE-L) and human judgment correlate well, but (b) the instance-level correlation plot (where each point is a system prediction) shows that the instancelevel correlation is quite low (ρ = 0.31). As a consequence, if we try to locally improve systems to produce better answers (▷in (a)), they do not significantly improve ROUGE scores and vice versa (△). tasks and systems.1 As predicted by the theory, we find that the data efficiency depends not only on the correlation between the human and automatic metrics, but also on the evaluation prompt. If the automatic metric had perfect correlation, our data efficiency would be around 3, while if we had noiseless human judgments, our data efficiency would be about 1.5. In reality, the reduction in cost we obtained was only about 10%, suggesting that improvements in both automatic metric and evaluation prompt are needed. As one case study in improving the latter, we show that, when compared to a Likert survey, measuring the amount of post-editing needed to fix a generated sentence reduced the annotator variance by three-fold. 2 Bias in automatic evaluation It is well understood that current automatic metrics tend to correlate poorly with human judgment at the instance-level. For example, Novikova et al. (2017) report correlations less than 0.3 for a large suite of word-based and grammar-based evaluation methods on a generation task. Similarly, Liu et al. (2016b) find correlations less than 0.35 for automatic metrics on a dialog generation task in one domain, but find correlations with the same metric dropped significantly to less than 0.16 when used in another domain. Still, somewhat surprisingly, several automatic metrics 1An anonymized version of this data and the annotation interfaces used can be found at https://bit.ly/ price-of-debiasing. have been found to have high system-level correlations (Novikova et al., 2017). What, then, are the implications of having a low instance-level correlation? As a case study, consider the task of openresponse question answering: here, a system receives a human-generated question and must generate an answer from some given context, e.g. a document or several webpages. We collected the responses of several systems on the MS MARCOv1 dataset (Nguyen et al., 2016) and crowdsourced human evaluations of the system output (see Section 4 for details). The instance-level correlation (Figure 1b) is only ρ = 0.31. A closer look at the instance-level correlation reveals that while ROUGE is able to correctly assign low scores to bad examples (lower left), it is bad at judging good examples and often assigns them low ROUGE scores (lower right)— see Table 1 for examples. This observation agrees with a finding reported in Novikova et al. (2017) that automatic metrics correlate better with human judgments on bad examples than average or good examples. 
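The instance-level correlation referenced above can be computed with, for example, a plain Pearson correlation over aligned per-output scores; the following minimal sketch uses hypothetical score lists and assumes SciPy is available.

```python
# Instance-level correlation between human judgments and an automatic metric.
# `human` and `rouge_l` are hypothetical aligned per-output scores.
from scipy.stats import pearsonr

human = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0]
rouge_l = [0.86, 0.09, 0.23, 0.71, 0.12, 0.18]
rho, p_value = pearsonr(human, rouge_l)
print(f"instance-level rho = {rho:.2f} (p = {p_value:.3f})")
```

For the systems studied here, this instance-level correlation is only about 0.31 despite the strong system-level trend.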
Thus, as Figure 1(a) shows, we can improve low-scoring ROUGE examples without improving their human judgment (△) and vice versa (▷). Indeed, Conroy and Dang (2008) report that summarization systems were optimized for ROUGE during the DUC challenge (Dang, 2006) until they were indistinguishable from the ROUGE scores of human-generated summaries, but the systems 645 Question and reference answer System answer (System; Corr / ROUGE-L) Examples where system is correct and ROUGE-L > 0.5 (19.6% or 285 of 1455 unique responses) Q. what is anti-mullerian hormone A. Anti-Mullerian Hormone (AMH) is a protein hormone produced by granulosa cells (cells lining the egg sacs or follicles) within the ovary. it is a protein hormone produced by granulosa cells (cells lining the egg sacs or follicles) within the ovary. (snet.ens; ✓/ 0.86) Examples where system is incorrect and ROUGE-L > 0.5 (1.3% or 19 of 1455 unique responses) Q. at what gestational age can you feel a fetus move A. 37 to 41 weeks (incorrect reference answer) 37 to 41 weeks (fastqa, fastqa.ext; × / 1.0) Examples where system is correct and ROUGE-L < 0.5 (56.0% or 815 of 1455 unique responses) Q. what is the definition of onomatopoeia A. It is defined as a word, which imitates the natural sounds of a thing. the naming of a thing or action by a vocal imitation of the sound associated with it (as buzz, hiss). (fastqa; ✓/ 0.23) Examples where system is incorrect and ROUGE-L < 0.5 (23.1% or 336 of 1455 unique responses) Q. what kind root stem does a dandelion have A. Fibrous roots and hollow stem. vitamin a, vitamin c, vitamin d and vitamin b complex, as well as zinc, iron and potassium. (snet, snet.ens; × / 0.09) (a) MS MARCO. Human annotators rated answer correctness (AnyCorrect) and the automatic metric used is ROUGE-L (higher is better). Reference summary System summary (System; Edit / VecSim) Examples where system Edit < 0.3 and VecSim > 0.5 (53.9% or 1078 of 2000 responses) Bhullar is set to sign a ■-day contract with the Kings. The ■-year-old will become the NBA’s first player of Indian descent. Bhullar will be on the roster when the Kings host New Orleans Pelicans. Bhullar andThe Kings are signing Bhullar to a ■-day contract. The ■-year-old will be on the roster on friday when David Wear’s ■-season contract expires thursday. Bhullar is set to become the NBA’s first player of Indian descent. (ml; 0.13 / 0.82) Examples where system Edit > 0.3 and VecSim > 0.5 (18.0% or 360 of 2000 responses) The Direct Marketing Commission probing B2C Data and Data Bubble. Investigating whether they breached rules on the sale of private data. Chief commissioner described allegations made about firms as ‘serious’. ■Data obtained by the Mail’s marketing commission said it would probe both companies over claims that they had breached the rules on the sale of private data. The FSA said it would probe both companies over claims they had breached the rules on the sale of private data. (se2seq; 1.00 / 0.72) Examples where system Edit < 0.3 and VecSim < 0.5 (14.5% or 290 of 2000 responses) Death toll rises to more than ■. Pemba Tamang, ■, shows no apparent signs of serious injury after rescue. Americans special forces helicopter ■, including ■ Americans, to safety. Six of Despite Nepal’s tragedy, life triumphed in Kathmandu’s hard-hit neighborhoods. Rescuers pulled an 15-year-old from the rubble of a multistory residential building. He was wearing a New York shirt and a blue neck brace. 
(pointer; 0.04 / 0.27) Examples where system Edit > 0.3 and VecSim < 0.5 (13.6% or 272 of 2000 responses) “Mad Men’s” final seven episodes begin airing April ■. The show has never had high ratings but is considered one of the great TV series. It’s unknown what will happen to characters, but we can always guess. ‘This’s “Mad Men” is the end of a series of an era’, This he says. Stores have created fashion lines inspired by the show.“The Sopranos”. The in ■the Kent State shootings in may ■or Richard Nixon´s ■re-election.. (ml+rl; 0.95 / 0.24) (b) CNN/Daily Mail. Human judgment scores used are post-edit distance (Edit) (lower is better) and the automatic metric used is sentence vector similarity with the reference (higher is better). Table 1: Examples highlighting the different modes in which the automatic metric and human judgments may agree or disagree. On the MS MARCO task, a majority of responses from systems were actually correct but poorly scored according to ROUGE-L. On the CNN/Daily Mail task, a significant number of examples which are scored highly by VecSim are poorly rated by humans, and likewise many examples scored poorly by VecSim are highly rated by humans. 646 had hardly improved on human evaluation. Hillclimbing on ROUGE can also lead to a system that does worse on human scores, e.g. in machine translation (Wu et al., 2016). Conversely, genuine quality improvements might not be reflected in improvements in ROUGE. This bias also appears in pool-based evaluation for knowledge base population (Chaganty et al., 2017). Thus the problems with automatic metrics clearly motivate the need for human evaluation, but can we still use the automatic metrics somehow to save costs? 3 Statistical estimation for unbiased evaluation We will now formalize the problem of combining human evaluation with an automatic metric. Let X be a set of inputs (e.g., articles), and let S be the system (e.g. for summarization), which takes x ∈X and returns output S(x) (e.g. a summary). Let Z = {(x, S(x)) : x ∈X} be the set of system predictions. Let Y (z) be the random variable representing the human judgment according to some evaluation prompt (e.g. grammaticality or correctness), and define f(z) = E[Y (z)] to be the (unknown) human metric corresponding to averaging over an infinite number of human judgments. Our goal is to estimate the average across all examples: µ def = Ez[f(z)] = 1 |Z| X z∈Z f(z) (1) with as few queries to Y as possible. Let g be an automatic metric (e.g. ROUGE), which maps z to a real number. We assume evaluating g(z) is free. The central question is how to use g in conjunction with calls to Y to produce an unbiased estimate ˆµ (that is, E[ˆµ] = µ). In this section, we will construct a simple estimator based on control variates (Ripley, 2009), and prove that it is minimax optimal. 3.1 Sample mean We warm up with the most basic unbiased estimate, the sample mean. We sample z(1), . . . , z(n) independently with replacement from Z. Then, we sample each human judgment y(i) = Y (z(i)) independently.2 Define the estimator to be ˆµmean = 1 n Pn i=1 y(i). Note that ˆµmean is unbiased (E[ˆµmean] = µ). 2Note that this independence assumption isn’t quite true in practice since we do not control who annotates our data. We can define σ2 f def = Var(f(z)) as the variance of the human metric and σ2 a def = Ez[Var(Y (z))] as the variance of human judgment averaged over Z. By the law of total variance, the variance of our estimator is Var(ˆµmean) = 1 n(σ2 f + σ2 a). 
3.2 Control variates estimator

Now let us see how an automatic metric g can reduce variance. If there is no annotator variance ($\sigma_a^2 = 0$) so that Y(z) = f(z), we should expect the variance of f(z) − g(z) to be lower than the variance of f(z), assuming g is correlated with f — see Figure 2 for an illustration. The actual control variates estimator needs to handle noisy Y(z) (i.e. $\sigma_a^2 > 0$) and guard against a g(z) with low correlation. Let us standardize g to have zero mean and unit variance, because we have assumed it is free to evaluate. As before, let $z^{(1)}, \ldots, z^{(n)}$ be independent samples from Z and draw $y^{(i)} = Y(z^{(i)})$ independently as well. We define the control variates estimator as

$$\hat{\mu}_{\text{cv}} = \frac{1}{n} \sum_{i=1}^{n} \left( y^{(i)} - \alpha\, g(z^{(i)}) \right), \tag{3}$$

where

$$\alpha \overset{\text{def}}{=} \mathrm{Cov}(f(z), g(z)). \tag{4}$$

Intuitively, we have averaged over $y^{(i)}$ to handle the noise introduced by Y(z), and scaled g(z) to prevent an uncorrelated automatic metric from introducing too much noise. An important quantity governing the quality of an automatic metric g is the correlation between f(z) and g(z) (recall that g has unit variance):

$$\rho \overset{\text{def}}{=} \frac{\alpha}{\sigma_f}. \tag{5}$$

We can show that among all distributions with fixed $\sigma_f^2$, $\sigma_a^2$, and α (equivalently ρ), this estimator is minimax optimal, i.e. it has the least variance among all unbiased estimators:

Theorem 3.1. Among all unbiased estimators that are functions of $y^{(i)}$ and $g(z^{(i)})$, and for all distributions with a given $\sigma_f^2$, $\sigma_a^2$, and α,

$$\mathrm{Var}(\hat{\mu}_{\text{cv}}) = \frac{1}{n}\left(\sigma_f^2 (1 - \rho^2) + \sigma_a^2\right), \tag{6}$$

and no other estimator has a lower worst-case variance.

Figure 2: The samples from f(z) have a higher variance than the samples from f(z) − g(z) but the same mean. This is the key idea behind using control variates to reduce variance.

Figure 3: Inverse data efficiency for various values of γ and ρ (axes: normalized annotator variance γ vs. automatic metric correlation ρ). We need both low γ and high ρ to obtain significant gains.

Comparing the variances of the two estimators ((2) and (6)), we define the data efficiency as the ratio of the variances:

$$\mathrm{DE} \overset{\text{def}}{=} \frac{\mathrm{Var}(\hat{\mu}_{\text{mean}})}{\mathrm{Var}(\hat{\mu}_{\text{cv}})} = \frac{1 + \gamma}{1 - \rho^2 + \gamma}, \tag{7}$$

where $\gamma \overset{\text{def}}{=} \sigma_a^2 / \sigma_f^2$ is the normalized annotator variance. Data efficiency is the key quantity in this paper: it is the multiplicative reduction in the number of samples required when using the control variates estimator $\hat{\mu}_{\text{cv}}$ versus the sample mean $\hat{\mu}_{\text{mean}}$. Figure 3 shows the inverse data efficiency contours as a function of the correlation ρ and γ. When there is no correlation between human and automatic metrics (ρ = 0), the data efficiency is naturally 1 (no gain). In order to achieve a data efficiency of 2 (half the labeling cost), we need $|\rho| \ge \sqrt{2}/2 \approx 0.707$. Interestingly, even for an automatic metric with perfect correlation (ρ = 1), the data efficiency is still capped by $\frac{1+\gamma}{\gamma}$: unless γ → 0, the data efficiency cannot increase unboundedly. Intuitively, even if we knew that ρ = 1, f(z) would be undetermined up to a constant additive shift and just estimating the shift would incur a variance of $\frac{1}{n}\sigma_a^2$.

3.3 Using the control variates estimator

The control variates estimator can be easily integrated into an existing evaluation: we run human evaluation on a random sample of system outputs, automatic evaluation on all the system outputs, and plug in these results into Algorithm 1.
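For concreteness, the following Python sketch mirrors the plug-in form of the estimator in Eq. (3) (spelled out as Algorithm 1 below); it is an illustration under the assumption that the automatic metric has already been standardized to zero mean and unit variance over all system outputs, and all names are ours.

```python
def control_variates_estimate(y, g):
    """y: human judgments y^(i) on the n sampled system outputs.
    g: standardized automatic metric values g(z^(i)) on the same outputs
       (normalized using *all* system outputs, not just the sampled ones).
    Returns the control variates estimate of mu."""
    n = len(y)
    y_bar = sum(y) / n
    # Plug-in estimate of alpha = Cov(f(z), g(z)); introduces only O(1/n) bias.
    alpha_hat = sum((yi - y_bar) * gi for yi, gi in zip(y, g)) / n
    # Subtract the scaled metric from each judgment, then average.
    return sum(yi - alpha_hat * gi for yi, gi in zip(y, g)) / n
```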
It is vital that we are able to evaluate the automatic metric on a significantly larger set of examples than those with human evaluations, so that g(z) can be reliably normalized: without these additional examples, it can be shown that the optimal minimax estimator for µ is simply the naive estimate $\hat{\mu}_{\text{mean}}$. Intuitively, this is because estimating the mean of g(z) incurs an equally large variance as estimating µ. In other words, g(z) is only useful if we have additional information about g beyond the samples $\{z^{(i)}\}$.

Algorithm 1 shows the estimator. In practice, we do not know α = Cov(f(z), g(z)), so we use a plug-in estimate $\hat{\alpha}$ in line 3 to compute the estimate $\tilde{\mu}$ in line 4. We note that estimating α from data does introduce a O(1/n) bias, but when compared to the standard deviation, which decays as $\Theta(1/\sqrt{n})$, this bias quickly goes to 0.

Proposition 3.1. The estimator $\tilde{\mu}$ in Algorithm 1 has O(1/n) bias.

Algorithm 1 Control variates estimator
1: Input: n human evaluations $y^{(i)}$ on system outputs $z^{(i)}$, normalized automatic metric g
2: $\bar{y} = \frac{1}{n} \sum_i y^{(i)}$
3: $\hat{\alpha} = \frac{1}{n} \sum_i (y^{(i)} - \bar{y})\, g(z^{(i)})$
4: $\tilde{\mu} = \frac{1}{n} \sum_i \left( y^{(i)} - \hat{\alpha}\, g(z^{(i)}) \right)$
5: return $\tilde{\mu}$

An additional question that arises when applying Algorithm 1 is figuring out how many samples n to use. Given a target variance, the number of samples can be estimated using (6) with conservative estimates of $\sigma_f^2$, $\sigma_a^2$ and ρ. Alternatively, our estimator can be combined with a dynamic stopping rule (Mnih et al., 2008) to stop data collection once we reach a target confidence interval.

Task | Eval. | $\sigma_a^2$ | $\sigma_f^2$ | $\gamma = \sigma_a^2/\sigma_f^2$
CDM | Fluency | 0.32 | 0.26 | 1.23
CDM | Redund. | 0.26 | 0.43 | 0.61
CDM | Overall | 0.28 | 0.28 | 1.00
CDM | Edit | 0.07 | 0.18 | 0.36
MS MARCO | AnyCorr. | 0.14 | 0.15 | 0.95
MS MARCO | AvgCorr. | 0.12 | 0.13 | 0.91

Table 2: A summary of the key statistics, human metric variance ($\sigma_f^2$) and annotator variance ($\sigma_a^2$), for different datasets, CNN/Daily Mail (CDM) and MS MARCO, in our evaluation benchmark. We observe that the relative variance (γ) is fairly high for most evaluation prompts, upper bounding the data efficiency on these tasks. A notable exception is the Edit prompt, wherein systems are compared on the number of post-edits required to improve their quality.

3.4 Discussion of assumptions

We will soon see that empirical instantiations of γ and ρ lead to rather underwhelming data efficiencies in practice. In light of our optimality result, does this mean there is no hope for gains? Let us probe our assumptions. We assumed that the human judgments are uncorrelated across different system outputs; it is possible that a more accurate model of human annotators (e.g. Passonneau and Carpenter (2014)) could offer improvements. Perhaps with additional information about g(z), such as calibrated confidence estimates, we would be able to sample more adaptively. Of course, the most direct routes to improvement involve increasing the correlation of g with human judgments and reducing annotator variance, which we will discuss more later.

4 Tasks and datasets

In order to compare different approaches to evaluating systems, we first collected human judgments for the output of several automatic summarization and open-response question answering systems using Amazon Mechanical Turk. Details of instructions provided and quality assurance steps taken are provided in Appendix A of the supplementary material. In this section, we briefly describe how we collected this data.

Evaluating language quality in automatic summarization.
In automatic summarization, systems must generate a short (on average two or three sentence) summary of an article: for our study, we chose articles from the CNN/Daily Mail (CDM) dataset (Hermann et al., 2015; Nallapati et al., 2016) which come paired with reference summaries in the form of story highlights. We focus on the language quality of summaries and leave evaluating content selection to future work. For each summary, we collected human judgments on a scale from 1–3 (Figure 4a) for fluency, (lack of) redundancy, and overall quality of the summary using guidelines from the DUC summarization challenge (Dang, 2006). As an alternate human metric, we also asked workers to postedit the system’s summary to improve its quality, similar to the post-editing step in MT evaluations (Snover et al., 2006). Obtaining judgments costs about $0.15 per summary and this cost rises to about $0.40 per summary for post-editing. We collected judgments on the summaries generated by the seq2seq and pointer models of See et al. (2017), the ml and ml+rl models of Paulus et al. (2018), and the reference summaries.3 Before presenting the summaries to human annotators, we performed some minimal post-processing: we true-cased and de-tokenized the output of seq2seq and pointer using Stanford CoreNLP (Manning et al., 2014) and replaced “unknown” tokens in each system with a special symbol (■). Evaluating answer correctness. Next, we look at evaluating the correctness of system outputs in question answering using the MS MARCO question answering dataset (Nguyen et al., 2016). Here, each system is provided with a question and up to 10 paragraphs of context. The system generates open-response answers that do not need to be tied to a span in any paragraph. We first ask annotators to judge if the output is even plausible for the question, and if yes, ask them identify if it is correct according to each context paragraph. We found that requiring annotators to highlight regions in the text that support their decision substantially improved the quality of the output without increasing costs. Annotations cost $0.40 per system response.4 3All system output was obtained from the original authors through private communication. 4This cost could be significantly reduced if systems also 649 (a) Interface to evaluate language quality on CNN/Daily Mail (b) Interface to judge answer correctness on MS MARCO Figure 4: Screenshots of the annotation interfaces we used to measure (a) summary language quality on CNN/Daily Mail and (b) answer correctness on MS MARCO tasks. While our goal is to evaluate the correctness of the provided answer, we found that there are often answers which may be correct or incorrect depending on the context. For example, the question “what is a pothole” is typically understood to refer to a hole in a roadway, but also refers to a geological feature (Figure 4b). This is reflected when annotators mark one context paragraph to support the given answer but mark another to contradict it. We evaluated systems based on both the average correctness (AvgCorrect) of their answers across all paragraphs as well as whether their answer is correct according to any paragraph (AnyCorrect). We collected annotations on the systems generated by the fastqa and fastqa ext from Weissenborn et al. (2017) and the snet and snet.ens(emble) models from Tan et al. (2018), along with reference answers. The answers generated by the systems were used without any postprocessing. 
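To make the two correctness prompts concrete, here is a small Python sketch (ours, not the paper's code) that aggregates per-paragraph correctness judgments into the AnyCorrect and AvgCorrect scores; the input layout is an assumption.

```python
def correctness_scores(paragraph_judgments):
    """paragraph_judgments: list of booleans, one per context paragraph,
    indicating whether the system answer is correct according to that paragraph."""
    any_correct = 1.0 if any(paragraph_judgments) else 0.0               # AnyCorrect
    avg_correct = sum(paragraph_judgments) / len(paragraph_judgments)    # AvgCorrect
    return any_correct, avg_correct

# Example: an answer judged correct under 2 of 5 context paragraphs.
print(correctness_scores([True, False, True, False, False]))  # (1.0, 0.4)
```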
Surprisingly, we found that the correctness of the reference answers (according to the AnyCorrect metric) was only 73.5%, only 2% above that of the leading system (snet.ens). We manually inspected 30 reference answers which were annotated incorrectly and found that of those, about 95% were indeed incorrect. However, 62% are actually answerable from some paragraph, indicating that the real ceiling performance on this dataset is around 90% and that there is still room for improvement on this task.

5 Experimental results

We are now ready to evaluate the performance of our control variates estimator proposed in Section 3 using the datasets presented in Section 4. Recall that our primary quantity of interest is data efficiency, the ratio of the number of human judgments required to estimate the overall human evaluation score for the control variates estimator versus the sample mean. We briefly review the automatic metrics used in our evaluation before analyzing the results.

Automatic metrics. We consider the following frequently used automatic word-overlap based metrics in our work: BLEU (Papineni et al., 2002), ROUGE (Lin and Rey, 2004) and METEOR (Lavie and Denkowski, 2009). Following Novikova et al. (2017) and Liu et al. (2016b), we also compared a vector-based sentence similarity computed with sent2vec (Pagliardini et al., 2017) (VecSim). Figure 5 shows how each of these metrics is correlated with human judgment for the systems being evaluated. Unsurprisingly, the correlation varies considerably across systems, with token-based metrics correlating more strongly for systems that are more extractive in nature (fastqa and fastqa ext).

Figure 5: Correlations (Pearson ρ) of different automatic metrics (ROUGE-L, ROUGE-1, ROUGE-2, METEOR, BLEU-2, VecSim) on (a) MS MARCO with the AnyCorrect prompt and (b) CNN/Daily Mail with the Edit prompt. Certain systems are more correlated with certain automatic metrics than others, but overall the correlation is low to moderate for most systems and metrics.

Results.⁵ In Section 3 we proved that the control variates estimator is not only unbiased but also has the least variance among other unbiased estimators. Figure 6 plots the width of the 80% confidence interval, estimated using bootstrap, measured as a function of the number of samples collected for different tasks and prompts. As expected, the control variates estimator reduces the width of the confidence interval. We measure data efficiency by averaging the ratio of squared confidence intervals between the human baseline and control variates estimates. We observe that the data efficiency depends on the task, prompt and system, ranging from about 1.08 (a 7% cost reduction) to 1.15 (a 13% cost reduction) using current automatic metrics. As we showed in Section 3, further gains are fundamentally limited by the quality of the evaluation prompts and automatic metrics.

⁴(continued from the previous page) specify which passage they used to generate the answer.
⁵Extended results for other systems, metrics and prompts can be found at https://bit.ly/price-of-debiasing/.
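As an illustration of how γ, ρ, and the resulting data efficiency can be estimated from such annotations, the sketch below plugs rough sample estimates into Eq. (7); it assumes each sampled output has at least two human judgments and a standardized automatic-metric value, and it is not the paper's analysis code.

```python
import statistics as st

def estimate_data_efficiency(judgments, metric):
    """judgments: list of lists, >= 2 human scores per sampled system output.
    metric: standardized automatic-metric values (zero mean, unit variance),
            one per output. Returns rough plug-in (gamma, rho, DE) for Eq. (7)."""
    k = len(judgments[0])                                     # judgments per output
    f_hat = [st.mean(js) for js in judgments]                 # per-output human mean
    sigma_a2 = st.mean(st.variance(js) for js in judgments)   # annotator variance sigma_a^2
    # Var(f_hat) overestimates sigma_f^2 by roughly sigma_a^2 / k; correct crudely.
    sigma_f2 = max(st.variance(f_hat) - sigma_a2 / k, 1e-12)
    mf = st.mean(f_hat)
    alpha = sum((fi - mf) * gi for fi, gi in zip(f_hat, metric)) / len(f_hat)
    rho = min(abs(alpha) / sigma_f2 ** 0.5, 1.0)              # correlation of f and g
    gamma = sigma_a2 / sigma_f2                               # normalized annotator variance
    return gamma, rho, (1 + gamma) / (1 - rho ** 2 + gamma)   # Eq. (7)
```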
Figures 6a and 6b show how improving the quality of the evaluation prompt from a Likert-scale prompt for quality (Overall) to using post-editing (Edit) noticeably decreases variance and hence allows better automatic metrics to increase data efficiency. Likewise, Figure 6c shows how using a better automatic metric (ROUGE-L instead of VecSim) also reduces variance. Figure 6 also shows the conjectured confidence intervals if we were able to eliminate noise in human judgments (noiseless humans) or had an automatic metric that correlated perfectly with average human judgment (perfect metric). In particular, we use the mean of all (2–3) humans on each z for the perfect g(z) and use the mean of all humans on each z for the “noiseless” Y(z). In both cases, we are able to significantly increase data efficiency (i.e. decrease estimator variance). With zero annotator variance and using existing automatic metrics, the data efficiency ranges from 1.42 to 1.69. With automatic metrics with perfect correlation and the current variance of human judgments, it ranges from 2.38 to 7.25. Thus, we conclude that it is important not only to improve our automatic metrics but also the evaluation prompts we use during human evaluation.

6 Related work

In this work, we focus on using existing automatic metrics to decrease the cost of human evaluations. There has been much work on improving the quality of automatic metrics. In particular, there is interest in learning models (Lowe et al., 2017a; Dusek et al., 2017) that are able to optimize for improved correlations with human judgment. However, in our experience, we have found that these learned automatic metrics have trouble generalizing to different systems. The framework we provide allows us to safely incorporate such models into evaluation, exploiting them when their correlation is high but also not introducing bias when it is low.

Our key technical tool is control variates, a standard statistical technique used to reduce the variance of Monte Carlo estimates (Ripley, 2009). The technique has also been used in machine learning and reinforcement learning to lower variance estimates of gradients (Greensmith et al., 2004; Paisley et al., 2012; Ranganath et al., 2014). To the best of our knowledge, we are the first to apply this technique in the context of language evaluation.

Our work also highlights the importance of human evaluation. Chaganty et al. (2017) identified a similar problem of systematic bias in evaluation metrics in the setting of knowledge base population and also propose statistical estimators that rely on human evaluation to correct bias.
Unfortunately, their technique relies on having a structured output (relation triples) that is shared between systems and does not apply to evaluating natural language generation. In a similar vein, Chang et al. (2017) dynamically collect human feedback to learn better dialog policies.

Figure 6: 80% bootstrap confidence interval length as a function of the number of human judgments used when evaluating the indicated systems on their respective datasets and prompts. (a) seq2seq on CNN/Daily Mail using the Overall prompt: we see a modest reduction in variance (and hence cost) relative to human evaluation by using the VecSim automatic metric with the proposed control variates estimator to estimate Overall scores; the data efficiency (DE) is 1.06. (b) seq2seq on CNN/Daily Mail using Edit: by improving the evaluation prompt to use Edits instead, it is possible to further reduce variance relative to humans (DE is 1.15). (c) fastqa ext on MS MARCO using AnyCorrect: another way to reduce variance relative to humans is to improve the automatic metric evaluation; here using ROUGE-1 instead of VecSim improves the DE from 1.03 to 1.16. (Curves shown: Humans; Humans + VecSim; Noiseless humans + VecSim; Humans + ROUGE-1; Humans + perfect metric.)

7 Discussion

Prior work has shown that existing automatic metrics have poor instance-level correlation with mean human judgment and that they score many good quality responses poorly. As a result, the evaluation is systematically biased against genuine system improvements that would lead to higher human evaluation scores but not improve automatic metrics. In this paper, we have explored using an automatic metric to decrease the cost of human evaluation without introducing bias. In practice, we find that with current automatic metrics and evaluation prompts, data efficiencies are only 1.08–1.15 (a 7–13% cost reduction). Our theory shows that further improvements are only possible by improving the correlation of the automatic metric and reducing the annotator variance of the evaluation prompt. As an example of how evaluation prompts could be improved, we found that using post-edits of summaries decreased normalized annotator variance by a factor of three relative to using a Likert-scale survey. It should be noted that changing the evaluation prompt also changes the underlying ground truth f(z): it is up to us to find a prompt that still captures the essence of what we want to measure.

Without making stronger assumptions, the control variates estimator we proposed outlines the limitations of unbiased estimation. Where do we go from here? Certainly, we can try to improve the automatic metric (which is potentially as difficult as solving the task) and brainstorm alternative ways of soliciting evaluation (which has been less explored). Alternatively, we could give up on measuring absolute scores, and seek instead to find techniques that stably rank methods and thus improve them. As the NLP community tackles increasingly difficult tasks, human evaluation will only become more important.
We hope our work provides some clarity on to how to make it more cost effective. Reproducibility All code, data, and experiments for this paper are available on the CodaLab platform at https:// bit.ly/price-of-debiasing. Acknowledgments We are extremely grateful to the authors of the systems we evaluated for sharing their systems’ output with us. We also would like to thank Urvashi Khandelwal and Peng Qi for feedback on an earlier draft of the paper, the crowdworkers on Amazon Mechanical Turk and TurkNation for their work and feedback during the data collection process, and the anonymous reviewers for their constructive feedback. 652 References A. Chaganty, A. Paranjape, P. Liang, and C. Manning. 2017. Importance sampling for unbiased ondemand evaluation of knowledge base population. In Empirical Methods in Natural Language Processing (EMNLP). C. Chang, R. Yang, L. Chen, X. Zhou, and K. Yu. 2017. Affordable on-line dialogue policy learning. In Empirical Methods in Natural Language Processing (EMNLP). pages 223–231. J. M. Conroy and H. T. Dang. 2008. Mind the gap : Dangers of divorcing evaluations of summary content from linguistic quality. In International Conference on Computational Linguistics (COLING). pages 145–152. H. T. Dang. 2006. Overview of DUC 2006. In Document Understanding Conference. M. Denkowski and A. Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Workshop on Statistical Machine Translation. O. Dusek, J. Novikova, and V. Rieser. 2017. Referenceless quality estimation for natural language generation. arXiv . E. Greensmith, P. L. Bartlett, and J. Baxter. 2004. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research (JMLR) 5:1471–1530. K. M. Hermann, T. Koisk, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS). T. Koˇcisky, J. Schwarz, P. Blunsom, C. Dyer, K. M. Hermann, G. Melis, and E. Grefenstette. 2017. The NarrativeQA reading comprehension challenge. arXiv preprint arXiv:1712.07040 . A. Lavie and M. Denkowski. 2009. The meteor metric for automatic evaluation of machine translation. Machine Translation 23. C. Lin and M. Rey. 2004. Looking for a few good metrics: ROUGE and its evaluation. In NTCIR Workshop. T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Doll’ar, and C. L. Zitnick. 2014. Microsoft COCO: Common objects in context. In European Conference on Computer Vision (ECCV). pages 740–755. A. Liu, S. Soderland, J. Bragg, C. H. Lin, X. Ling, and D. S. Weld. 2016a. Effective crowd annotation for relation extraction. In North American Association for Computational Linguistics (NAACL). pages 897– 906. C. Liu, R. Lowe, I. V. Serban, M. Noseworthy, L. Charlin, and J. Pineau. 2016b. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Empirical Methods in Natural Language Processing (EMNLP). R. Lowe, M. Noseworthy, I. V. Serban, N. AngelardGontier, Y. Bengio, and J. Pineau. 2017a. Towards an automatic turing test: Learning to evaluate dialogue responses. In Association for Computational Linguistics (ACL). R. T. Lowe, N. Pow, I. Serban, L. Charlin, C. Liu, and J. Pineau. 2017b. Training end-to-end dialogue systems with the ubuntu dialogue corpus. Dialogue and Discourse 8. C. D. Manning, M. Surdeanu, J. Bauer, J. Finkel, S. J. Bethard, and D. 
McClosky. 2014. The stanford coreNLP natural language processing toolkit. In ACL system demonstrations. V. Mnih, C. Szepesv’ari, and J. Audibert. 2008. Empirical berstein stopping. In International Conference on Machine Learning (ICML). R. Nallapati, B. Zhou, C. Gulcehre, B. Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023 . T. Nguyen, M. Rosenberg, X. Song, J. Gao, S. Tiwary, R. Majumder, and L. Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Workshop on Cognitive Computing at NIPS. J. Novikova, O. Duek, A. C. Curry, and V. Rieser. 2017. Why we need new evaluation metrics for NLG. In Empirical Methods in Natural Language Processing (EMNLP). M. Pagliardini, P. Gupta, and M. Jaggi. 2017. Unsupervised learning of sentence embeddings using compositional n-gram features. arXiv . J. Paisley, D. M. Blei, and M. I. Jordan. 2012. Variational Bayesian inference with stochastic search. In International Conference on Machine Learning (ICML). pages 1363–1370. K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Association for Computational Linguistics (ACL). R. J. Passonneau and B. Carpenter. 2014. The benefits of a model of annotation. In Association for Computational Linguistics (ACL). R. Paulus, C. Xiong, and R. Socher. 2018. A deep reinforced model for abstractive summarization. In International Conference on Learning Representations (ICLR). 653 R. Ranganath, S. Gerrish, and D. Blei. 2014. Black box variational inference. In Artificial Intelligence and Statistics (AISTATS). pages 814–822. B. D. Ripley. 2009. Stochastic simulation. John Wiley & Sons. A. See, P. J. Liu, and C. D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Association for Computational Linguistics (ACL). M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Association for Machine Translation in the Americas. pages 223– 231. C. Tan, F. Wei, N. Yang, W. Lv, and M. Zhou. 2018. S-Net: From answer extraction to answer generation for machine reading comprehension. In Association for the Advancement of Artificial Intelligence (AAAI). R. Vedantam, C. L. Zitnick, and D. Parikh. 2015. CIDEr: Consensus-based image description evaluation. In Computer Vision and Pattern Recognition (CVPR). pages 4566–4575. D. Weissenborn, G. Wiese, and L. Seiffe. 2017. Making neural QA as simple as possible but not simpler. In Computational Natural Language Learning (CoNLL). Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 .
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 654–663 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 654 Neural Document Summarization by Jointly Learning to Score and Select Sentences Qingyu Zhou†∗, Nan Yang‡, Furu Wei‡, Shaohan Huang‡, Ming Zhou‡, Tiejun Zhao† †Harbin Institute of Technology, Harbin, China ‡Microsoft Research, Beijing, China {qyzhou,tjzhao}@hit.edu.cn {nanya,fuwei,shaohanh,mingzhou}@microsoft.com Abstract Sentence scoring and sentence selection are two main steps in extractive document summarization systems. However, previous works treat them as two separated subtasks. In this paper, we present a novel end-to-end neural network framework for extractive document summarization by jointly learning to score and select sentences. It first reads the document sentences with a hierarchical encoder to obtain the representation of sentences. Then it builds the output summary by extracting sentences one by one. Different from previous methods, our approach integrates the selection strategy into the scoring model, which directly predicts the relative importance given previously selected sentences. Experiments on the CNN/Daily Mail dataset show that the proposed framework significantly outperforms the state-of-the-art extractive summarization models. 1 Introduction Traditional approaches to automatic text summarization focus on identifying important content, usually at sentence level (Nenkova and McKeown, 2011). With the identified important sentences, a summarization system can extract them to form an output summary. In recent years, extractive methods for summarization have proven effective in many systems (Carbonell and Goldstein, 1998; Mihalcea and Tarau, 2004; McDonald, 2007; Cao et al., 2015a). In previous works that use extractive methods, text summarization is decomposed into two subtasks, i.e., sentence scoring and sentence selection. ∗Contribution during internship at Microsoft Research. Sentence scoring aims to assign an importance score to each sentence, and has been broadly studied in many previous works. Feature-based methods are popular and have proven effective, such as word probability, TF*IDF weights, sentence position and sentence length features (Luhn, 1958; Hovy and Lin, 1998; Ren et al., 2017). Graph-based methods such as TextRank (Mihalcea and Tarau, 2004) and LexRank (Erkan and Radev, 2004) measure sentence importance using weighted-graphs. In recent years, neural network has also been applied to sentence modeling and scoring (Cao et al., 2015a; Ren et al., 2017). For the second step, sentence selection adopts a particular strategy to choose content sentence by sentence. Maximal Marginal Relevance (Carbonell and Goldstein, 1998) based methods select the sentence that has the maximal score and is minimally redundant with sentences already included in the summary. Integer Linear Programming based methods (McDonald, 2007) treat sentence selection as an optimization problem under some constraints such as summary length. Submodular functions (Lin and Bilmes, 2011) have also been applied to solving the optimization problem of finding the optimal subset of sentences in a document. Ren et al. (2016) train two neural networks with handcrafted features. One is used to rank sentences, and the other one is used to model redundancy during sentence selection. 
In this paper, we present a neural extractive document summarization (NEUSUM) framework which jointly learns to score and select sentences. Different from previous methods that treat sentence scoring and sentence selection as two tasks, our method integrates the two steps into one endto-end trainable model. Specifically, NEUSUM is a neural network model without any handcrafted features that learns to identify the relative importance of sentences. The relative importance is 655 measured as the gain over previously selected sentences. Therefore, each time the proposed model selects one sentence, it scores the sentences considering both sentence saliency and previously selected sentences. Through the joint learning process, the model learns to predict the relative gain given the sentence extraction state and the partial output summary. The proposed model consists of two parts, i.e., the document encoder and the sentence extractor. The document encoder has a hierarchical architecture, which suits the compositionality of documents. The sentence extractor is built with recurrent neural networks (RNN), which provides two main functionalities. On one hand, the RNN is used to remember the partial output summary by feeding the selected sentence into it. On the other hand, it is used to provide a sentence extraction state that can be used to score sentences with their representations. At each step during extraction, the sentence extractor reads the representation of the last extracted sentence. It then produces a new sentence extraction state and uses it to score the relative importance of the rest sentences. We conduct experiments on the CNN/Daily Mail dataset. The experimental results demonstrate that the proposed NEUSUM by jointly scoring and selecting sentences achieves significant improvements over separated methods. Our contributions are as follows: • We propose a joint sentence scoring and selection model for extractive document summarization. • The proposed model can be end-to-end trained without handcrafted features. • The proposed model significantly outperforms state-of-the-art methods and achieves the best result on CNN/Daily Mail dataset. 2 Related Work Extractive document summarization has been extensively studied for years. As an effective approach, extractive methods are popular and dominate the summarization research. Traditional extractive summarization systems use two key techniques to form the summary, sentence scoring and sentence selection. Sentence scoring is critical since it is used to measure the saliency of a sentence. Sentence selection is based on the scores of sentences to determine which sentence should be extracted, which is usually done heuristically. Many techniques have been proposed to model and score sentences. Unsupervised methods do not require model training or data annotation. In these methods, many surface features are useful, such as term frequency (Luhn, 1958), TF*IDF weights (Erkan and Radev, 2004), sentence length (Cao et al., 2015a) and sentence positions (Ren et al., 2017). These features can be used alone or combined with weights. Graph-based methods (Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Wan and Yang, 2006) are also applied broadly to ranking sentences. In these methods, the input document is represented as a connected graph. The vertices represent the sentences, and the edges between vertices have attached weights that show the similarity of the two sentences. 
The score of a sentence is the importance of its corresponding vertex, which can be computed using graph algorithms. Machine learning techniques are also widely used for better sentence modeling and importance estimation. Kupiec et al. (1995) use a Naive Bayes classifier to learn feature combinations. Conroy and O’leary (2001) further use a Hidden Markov Model in document summarization. Gillick and Favre (2009) find that using bigram features consistently yields better performance than unigrams or trigrams for ROUGE (Lin, 2004) measures. Carbonell and Goldstein (1998) proposed the Maximal Marginal Relevance (MMR) method as a heuristic in sentence selection. Systems using MMR select the sentence which has the maximal score and is minimally redundant with previous selected sentences. McDonald (2007) treats sentence selection as an optimization problem under some constraints such as summary length. Therefore, he uses Integer Linear Programming (ILP) to solve this optimization problem. Sentence selection can also be seen as finding the optimal subset of sentences in a document. Lin and Bilmes (2011) propose using submodular functions to find the subset. Recently, deep neural networks based approaches have become popular for extractive document summarization. Cao et al. (2015b) develop a novel summary system called PriorSum, which applies enhanced convolutional neural networks to capture the summary prior features derived from length-variable phrases. Ren et al. (2017) use 656 a two-level attention mechanism to measure the contextual relations of sentences. Cheng and Lapata (2016) propose treating document summarization as a sequence labeling task. They first encode the sentences in the document and then classify each sentence into two classes, i.e., extraction or not. Nallapati et al. (2017) propose a system called SummaRuNNer with more features, which also treat extractive document summarization as a sequence labeling task. The two works are both in the separated paradigm, as they first assign a probability of being extracted to each sentence, and then select sentences according to the probability until reaching the length limit. Ren et al. (2016) train two neural networks with handcrafted features. One is used to rank the sentences to select the first sentence, and the other one is used to model the redundancy during sentence selection. However, their model of measuring the redundancy only considers the redundancy between the sentence that has the maximal score, which lacks the modeling of all the selection history. 3 Problem Formulation Extractive document summarization aims to extract informative sentences to represent the important meanings of a document. Given a document D = (S1, S2, . . . , SL) containing L sentences, an extractive summarization system should select a subset of D to form the output summary S = { ˆSi| ˆSi ∈D}. During the training phase, the reference summary S∗and the score of an output summary S under a given evaluation function r(S|S∗) are available. The goal of training is to learn a scoring function f(S) which can be used to find the best summary during testing: arg max S f(S) s.t. S = { ˆSi| ˆSi ∈D} |S| ≤l. where l is length limit of the output summary. In this paper, l is the sentence number limit. Previous state-of-the-art summarization systems search the best solution using the learned scoring function f(·) with two methods, MMR and ILP. In this paper, we adopt the MMR method. 
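A minimal Python sketch of the greedy, MMR-style selection loop that this formulation implies is shown below; it is our illustration, with `gain` standing in for whatever scoring function is learned (instantiated in this paper as a relative ROUGE F1 gain, defined next).

```python
def greedy_extract(doc_sentences, gain, max_sents=3):
    """Greedy gain-based extraction: at each step pick the remaining sentence
    with the largest gain over the partial summary, up to max_sents sentences.
    `gain(candidate, selected)` is a placeholder for the learned scorer."""
    selected = []                       # indices of extracted sentences, in order
    remaining = list(range(len(doc_sentences)))
    while remaining and len(selected) < max_sents:
        best = max(remaining,
                   key=lambda i: gain(doc_sentences[i],
                                      [doc_sentences[j] for j in selected]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

NEUSUM instantiates `gain` with a learned relative-gain scorer rather than a hand-built relevance/redundancy trade-off.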
Since MMR tries to maximize the relative gain given previous extracted sentences, we let the model to learn to score this gain. Previous works adopt ROUGE recall as the evaluation r(·) considering the DUC tasks have byte length limit for summaries. In this work, we adopt the CNN/Daily Mail dataset to train the neural network model, which does not have this length limit. To prevent the tendency of choosing longer sentences, we use ROUGE F1 as the evaluation function r(·), and set the length limit l as a fixed number of sentences. Therefore, the proposed model is trained to learn a scoring function g(·) of the ROUGE F1 gain, specifically: g(St|St−1) = r (St−1 ∪{St}) −r(St−1) (1) where St−1 is the set of previously selected sentences, and we omit the condition S∗of r(·) for simplicity. At each time t, the summarization system chooses the sentence with maximal ROUGE F1 gain until reaching the sentence number limit. 4 Neural Document Summarization Figure 1 gives the overview of NEUSUM, which consists of a hierarchical document encoder, and a sentence extractor. Considering the intrinsic hierarchy nature of documents, that words form a sentence and sentences form a document, we employ a hierarchical document encoder to reflect this hierarchy structure. The sentence extractor scores the encoded sentences and extracts one of them at each step until reaching the output sentence number limit. In this section, we will first introduce the hierarchical document encoder, and then describe how the model produces summary by joint sentence scoring and selection. 4.1 Document Encoding We employ a hierarchical document encoder to represent the sentences in the input document. We encode the document in two levels, i.e., sentence level encoding and document level encoding. Given a document D = (S1, S2, . . . , SL) containing L sentences. The sentence level encoder reads the j-th input sentence Sj = (x(j) 1 , x(j) 2 , . . . , x(j) nj ) and constructs the basic sentence representation esj. Here we employ a bidirectional GRU (BiGRU) (Cho et al., 2014) as the recurrent unit, where GRU is defined as: zi = σ(Wz[xi, hi−1]) ri = σ(Wr[xi, hi−1]) ehi = tanh(Wh[xi, ri ⊙hi−1]) hi = (1 −zi) ⊙hi−1 + zi ⊙ehi (2) (3) (4) (5) 657 ⃗h(3) 1 x(3) 1 ⃗ h (3) 1 ⃗h(3) 2 x(3) 2 ⃗ h (3) 2 ⃗h(3) 3 x(3) 3 ⃗ h (3) 3 ⃗h(3) 4 x(3) 4 ⃗ h (3) 4 ⃗h(3) 5 x(3) 5 ⃗ h (3) 5 ⃗h(3) 6 x(3) 6 ⃗ h (3) 6 s1 es1 s2 es2 s3 es3 s4 es4 s5 es5 Sentence Level Encoding Document Level Encoding h1 0 h2 s5 h3 s1 Joint Sentence Scoring and Selection arg max = 5 arg max = 1 arg max =? Figure 1: Overview of the NEUSUM model. The model extracts S5 and S1 at the first two steps. At the first step, we feed the model a zero vector 0 to represent empty partial output summary. At the second and third steps, the representations of previously selected sentences S5 and S1, i.e., s5 and s1, are fed into the extractor RNN. At the second step, the model only scores the first 4 sentences since the 5th one is already included in the partial output summary. where Wz, Wr and Wh are weight matrices. The BiGRU consists of a forward GRU and a backward GRU. The forward GRU reads the word embeddings in sentence Sj from left to right and gets a sequence of hidden states, (⃗h(j) 1 ,⃗h(j) 2 , . . . ,⃗h(j) nj ). The backward GRU reads the input sentence embeddings reversely, from right to left, and results in another sequence of hidden states, ( ⃗ h (j) 1 , ⃗ h (j) 2 , . . . 
, ⃗ h (j) nj ): ⃗h(j) i = GRU(x(j) i ,⃗h(j) i−1) ⃗ h (j) i = GRU(x(j) i , ⃗ h (j) i+1) (6) (7) where the initial states of the BiGRU are set to zero vectors, i.e., ⃗h(j) 1 = 0 and ⃗ h (j) nj = 0. After reading the words of the sentence Sj, we construct its sentence level representation esj by concatenating the last forward and backward GRU hidden vectors: esj = " ⃗ h (j) 1 ⃗h(j) nj # (8) We use another BiGRU as the document level encoder to read the sentences. With the sentence level encoded vectors (es1, es2, . . . , esL) as inputs, the document level encoder does forward and backward GRU encoding and produces two list of hidden vectors: (⃗s1,⃗s2, . . . ,⃗sL) and ( ⃗ s1, ⃗ s2, . . . , ⃗ sL). The document level representation si of sentence Si is the concatenation of the forward and backward hidden vectors: si = ⃗si ⃗ si  (9) We then get the final sentence vectors in the given document: D = (s1, s2, . . . , sL). We use sentence Si and its representative vector si interchangeably in this paper. 4.2 Joint Sentence Scoring and Selection Since the separated sentence scoring and selection cannot utilize the information of each other, the goal of our model is to make them benefit each other. We couple these two steps together so that: a) sentence scoring can be aware of previously selected sentences; b) sentence selection can be simplified since the scoring function is learned to be the ROUGE score gain as described in section 3. Given the last extracted sentence ˆSt−1, the sentence extractor decides the next sentence ˆSt by scoring the remaining document sentences. To score the document sentences considering both their importance and partial output summary, the model should have two key abilities: 1) remembering the information of previous selected sentences; 2) scoring the remaining document sentences based on both the previously selected sentences and the importance of remaining sentences. Therefore, we employ another GRU as the recurrent unit to remember the partial output summary, and use a Multi-Layer Perceptron (MLP) to score 658 the document sentences. Specifically, the GRU takes the document level representation st−1 of the last extracted sentence ˆSt−1 as input to produce its current hidden state ht. The sentence scorer, which is a two-layer MLP, takes two input vectors, namely the current hidden state ht and the sentence representation vector si, to calculate the score δ(Si) of sentence Si. ht = GRU(st−1, ht−1) δ(Si) = Ws tanh (Wqht + Wdsi) (10) (11) where Ws, Wq and Wd are learnable parameters, and we omit the bias parameters for simplicity. When extracting the first sentence, we initialize the GRU hidden state h0 with a linear layer with tanh activation function: h0 = tanh (Wm ⃗ s1 + bm) S0 = ∅ s0 = 0 (12) (13) (14) whereWm and bm are learnable parameters, and ⃗ s1 is the last backward state of the document level encoder BiGRU. Since we do not have any sentences extracted yet, we use a zero vector to represent the previous extracted sentence, i.e., s0 = 0. With the scores of all sentences at time t, we choose the sentence with maximal gain score: ˆSt = arg max Si∈D δ(Si) (15) 4.3 Objective Function Inspired by Inan et al. (2017), we optimize the Kullback-Leibler (KL) divergence of the model prediction P and the labeled training data distribution Q. 
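Before turning to the objective, the extractor step just described (Eqs. (10)–(15)) can be sketched compactly; the PyTorch code below is a paraphrase with hypothetical dimensions and names, not the released implementation.

```python
import torch
import torch.nn as nn

class SentenceExtractor(nn.Module):
    """One joint scoring-and-selection step: a GRU tracks the partial output
    summary and an MLP scores every document sentence given that state."""
    def __init__(self, sent_dim=512, hidden_dim=256):
        super().__init__()
        self.gru = nn.GRUCell(sent_dim, hidden_dim)           # Eq. (10)
        self.w_q = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.w_d = nn.Linear(sent_dim, hidden_dim, bias=False)
        self.w_s = nn.Linear(hidden_dim, 1, bias=False)        # Eq. (11)

    def step(self, last_sent, h_prev, sent_vecs, selected_mask):
        # last_sent: (batch, sent_dim) representation of the last extracted
        # sentence (a zero vector at the first step, Eq. (14)).
        # sent_vecs: (batch, L, sent_dim); selected_mask: (batch, L) booleans.
        h = self.gru(last_sent, h_prev)
        scores = self.w_s(torch.tanh(self.w_q(h).unsqueeze(1)
                                     + self.w_d(sent_vecs))).squeeze(-1)
        scores = scores.masked_fill(selected_mask, float("-inf"))  # skip chosen ones
        return h, scores, scores.argmax(dim=-1)                    # Eq. (15)
```

At the first step, `h_prev` would correspond to the tanh-initialized state of Eq. (12) and `last_sent` to the zero vector of Eq. (14).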
We normalize the predicted sentence score δ(Si) with softmax function to get the model prediction distribution P: P( ˆSt = Si) = exp (δ(Si)) PL k=1 exp (δ(Sk)) (16) During training, the model is expected to learn the relative ROUGE F1 gain at time step t with previously selected sentences St−1. Considering that the F1 gain value might be negative in the labeled data, we follow previous works (Ren et al., 2017) to use Min-Max Normalization to rescale the gain value to [0, 1]: g(Si) = r(St−1 ∪{Si}) −r(St−1) eg(Si) = g(Si) −min (g(S)) max (g(S)) −min (g(S)) (17) (18) We then apply a softmax operation with temperature τ (Hinton et al., 2015) 1 to produce the labeled data distribution Q as the training target. We apply the temperature τ as a smoothing factor to produce a smoothed label distribution Q: Q(Si) = exp (τeg(Si)) PL k=1 exp (τeg(Sk)) (19) Therefore, we minimize the KL loss function J: J = DKL(P ∥Q) (20) 5 Experiments 5.1 Dataset A large scale dataset is essential for training neural network-based summarization models. We use the CNN/Daily Mail dataset (Hermann et al., 2015; Nallapati et al., 2016) as the training set in our experiments. The CNN/Daily Mail news contain articles and their corresponding highlights. The highlights are created by human editors and are abstractive summaries. Therefore, the highlights are not ready for training extractive systems due to the lack of supervisions. We create an extractive summarization training set based on CNN/Daily Mail corpus. To determine the sentences to be extracted, we design a rule-based system to label the sentences in a given document similar to Nallapati et al. (2017). Specifically, we construct training data by maximizing the ROUGE-2 F1 score. Since it is computationally expensive to find the global optimal combination of sentences, we employ a greedy approach. Given a document with n sentences, we enumerate the candidates from 1-combination n 1  to n-combination n n  . We stop searching if the highest ROUGE-2 F1 score in n k  is less than the best one in n k−1  . Table 1 shows the data statistics of the CNN/Daily Mail dataset. We conduct data preprocessing using the same method2 in See et al. (2017), including sentence splitting and word tokenization. Both Nallapati et al. (2016, 2017) use the anonymized version of the data, where the named entities are replaced by identifiers such as entity4. Following See et al. (2017), we use the non-anonymized version so we can directly operate on the original text. 1We set τ = 20 empirically according to the model performance on the development set. 2https://github.com/abisee/cnn-dailymail 659 CNN/Daily Mail Training Dev Test #(Document) 287,227 13,368 11,490 #(Ref / Document) 1 1 1 Doc Len (Sentence) 31.58 26.72 27.05 Doc Len (Word) 791.36 769.26 778.24 Ref Len (Sentence) 3.79 4.11 3.88 Ref Len (Word) 55.17 61.43 58.31 Table 1: Data statistics of CNN/Daily Mail dataset. 5.2 Implementation Details Model Parameters The vocabulary is collected from the CNN/Daily Mail training data. We lowercase the text and there are 732,304 unique word types. We use the top 100,000 words as the model vocabulary since they can cover 98.23% of the training data. The size of word embedding, sentence level encoder GRU, document level encoder GRU are set to 50, 256, and 256 respectively. We set the sentence extractor GRU hidden size to 256. Model Training We initialize the model parameters randomly using a Gaussian distribution with Xavier scheme (Glorot and Bengio, 2010). 
The word embedding matrix is initialized using pretrained 50-dimension GloVe vectors (Pennington et al., 2014)3. We found that larger size GloVe does not lead to improvement. Therefore, we use 50-dim word embeddings for fast training. The pre-trained GloVe vectors contain 400,000 words and cover 90.39% of our model vocabulary. We initialize the rest of the word embeddings randomly using a Gaussian distribution with Xavier scheme. The word embedding matrix is not updated during training. We use Adam (Kingma and Ba, 2015) as our optimizing algorithm. For the hyperparameters of Adam optimizer, we set the learning rate α = 0.001, two momentum parameters β1 = 0.9 and β2 = 0.999 respectively, and ϵ = 10−8. We also apply gradient clipping (Pascanu et al., 2013) with range [−5, 5] during training. We use dropout (Srivastava et al., 2014) as regularization with probability p = 0.3 after the sentence level encoder and p = 0.2 after the document level encoder. We truncate each article to 80 sentences and each sentence to 100 words during both training and testing. The model is implemented with PyTorch (Paszke et al., 2017). We 3https://nlp.stanford.edu/projects/ glove/ release the source code and related resources at https://res.qyzhou.me. Model Testing At test time, considering that LEAD3 is a commonly used and strong extractive baseline, we make NEUSUM and the baselines extract 3 sentences to make them all comparable. 5.3 Baseline We compare NEUSUM model with the following state-of-the-art baselines: LEAD3 The commonly used baseline by selecting the first three sentences as the summary. TEXTRANK An unsupervised algorithm based on weighted-graphs proposed by Mihalcea and Tarau (2004). We use the implementation in Gensim ( ˇReh˚uˇrek and Sojka, 2010). CRSUM Ren et al. (2017) propose an extractive summarization system which considers the contextual information of a sentence. We train this baseline model with the same training data as our approach. NN-SE Cheng and Lapata (2016) propose an extractive system which models document summarization as a sequence labeling task. We train this baseline model with the same training data as our approach. SUMMARUNNER Nallapati et al. (2017) propose to add some interpretable features such as sentence absolute and relative positions. PGN Pointer-Generator Network (PGN). A stateof-the-art abstractive document summarization system proposed by See et al. (2017), which incorporates copying and coverage mechanisms. 5.4 Evaluation Metric We employ ROUGE (Lin, 2004) as our evaluation metric. ROUGE measures the quality of summary by computing overlapping lexical units, such as unigram, bigram, trigram, and longest common subsequence (LCS). It has become the standard evaluation metric for DUC shared tasks and popular for summarization evaluation. Following previous work, we use ROUGE-1 (unigram), ROUGE2 (bigram) and ROUGE-L (LCS) as the evaluation metrics in the reported experimental results. 660 5.5 Results We use the official ROUGE script4 (version 1.5.5) to evaluate the summarization output. Table 2 summarizes the results on CNN/Daily Mail data set using full length ROUGE-F15 evaluation. It includes two unsupervised baselines, LEAD3 and TEXTRANK. The table also includes three stateof-the-art neural network based extractive models, i.e., CRSUM, NN-SE and SUMMARUNNER. In addition, we report the state-of-the-art abstractive PGN model. The result of SUMMARUNNER is on the anonymized dataset and not strictly comparable to our results on the non-anonymized version dataset. 
Therefore, we also include the result of LEAD3 on the anonymized dataset as a reference. Models ROUGE-1 ROUGE-2 ROUGE-L LEAD3 40.2417.7036.45TEXTRANK 40.2017.5636.44CRSUM 40.5218.0836.81NN-SE 41.1318.5937.40PGN‡ 39.5317.2836.38LEAD3‡ * 39.2 15.7 35.5 SUMMARUNNER‡ * 39.6 16.2 35.3 NEUSUM 41.59 19.01 37.98 Table 2: Full length ROUGE F1 evaluation (%) on CNN/Daily Mail test set. Results with ‡ mark are taken from the corresponding papers. Those marked with * were trained and evaluated on the anonymized dataset, and so are not strictly comparable to our results on the original text. All our ROUGE scores have a 95% confidence interval of at most ±0.22 as reported by the official ROUGE script. The improvement is statistically significant with respect to the results with superscript - mark. NEUSUM achieves 19.01 ROUGE-2 F1 score on the CNN/Daily Mail dataset. Compared to the unsupervised baseline methods, NEUSUM performs better by a large margin. In terms of ROUGE2 F1, NEUSUM outperforms the strong baseline LEAD3 by 1.31 points. NEUSUM also outperforms the neural network based models. Compared to the state-of-the-art extractive model NNSE (Cheng and Lapata, 2016), NEUSUM performs significantly better in terms of ROUGE-1, ROUGE2 and ROUGE-L F1 scores. Shallow features, such 4http://www.berouge.com/ 5The ROUGE evaluation option is, -m -n 2 as sentence position, have proven effective in document summarization (Ren et al., 2017; Nallapati et al., 2017). Without any hand-crafted features, NEUSUM performs better than the CRSUM and SUMMARUNNER baseline models with features. As given by the 95% confidence interval in the official ROUGE script, our model achieves statistically significant improvements over all the baseline models. To the best of our knowledge, the proposed NEUSUM model achieves the best results on the CNN/Daily Mail dataset. Models Info Rdnd Overall NN-SE 1.36 1.29 1.39 NEUSUM 1.33 1.21 1.34 Table 3: Rankings of NEUSUM and NN-SE in terms of informativeness (Info), redundancy (Rdnd) and overall quality by human participants (lower is better). We also provide human evaluation results on a sample of test set. We random sample 50 documents and ask three volunteers to evaluate the output of NEUSUM and the NN-SE baseline models. They are asked to rank the output summaries from best to worst (with ties allowed) regarding informativeness, redundancy and overall quality. Table 3 shows the human evaluation results. NEUSUM performs better than the NN-SE baseline on all three aspects, especially in redundancy. This indicates that by jointly scoring and selecting sentences, NEUSUM can produce summary with less content overlap since it re-estimates the saliency of remaining sentences considering both their contents and previously selected sentences. 6 Discussion 6.1 Precision at Step-t We analyze the accuracy of sentence selection at each step. Since we extract 3 sentences at test time, we show how NEUSUM performs when extracting each sentence. Given a document D in test set T, NEUSUM predicted summary S, its reference summary S∗, and the extractive oracle summary O with respect to D and S∗(we use the method described in section 5.1 to construct O), we define the precision at step t as p(@t): p(@t) = 1 |T| X D∈T 1O(S[t]) (21) 661 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 0 10 20 30 % of sentence position NN-SE NeuSum oracle Figure 2: Position distribution of selected sentences of the NN-SE baseline, our NEUSUM model and oracle on the test set. 
We only draw the first 30 sentences since the average document length is 27.05. where S[t] is the sentence extracted at step t, and 1O is the indicator function defined as: 1O(x) = ( 1 if x ∈O 0 if x /∈O (22) p(@1) p(@2) p(@3) 0.20 0.25 0.30 0.35 0.40 0.45 Precision NN-SE NeuSum Figure 3: Precision of extracted sentence at step t of the NN-SE baseline and the NEUSUM model. Figure 3 shows the precision at step t of NN-SE baseline and our NEUSUM. It can be observed that NEUSUM achieves better precision than the NNSE baseline at each step. For the first sentence, both NEUSUM and NN-SE achieves good performance. The NN-SE baseline has 39.18% precision at the first step, and NEUSUM outperforms it by 1.2 points. At the second step, NEUSUM outperforms NN-SE by a large margin. In this step, the NEUSUM model extracts 31.52% sentences correctly, which is 3.24 percent higher than 28.28% of NN-SE. We think the second step selection benefits from the first step in NEUSUM since it can remember the selection history, while the separated models lack this ability. However, we can notice the trend that the precision drops fast after each selection. We think this is due to two main reasons. First, we think that the error propagation leads to worse selection for the third selection. As shown in Figure 2, the p(@1) and p(@2) are 40.38% and 31.52% respectively, so the history is less reliable for the third selection. Second, intuitively, we think the later selections are more difficult compared to the previous ones since the most important sentences are already selected. 6.2 Position of Selected Sentences Early works (Ren et al., 2017; Nallapati et al., 2017) have shown that sentence position is an important feature in extractive document summarization. Figure 2 shows the position distributions of the NN-SE baseline, our NEUSUM model and oracle on the CNN/Daily Mail test set. It can be seen that the NN-SE baseline model tends to extract large amount of leading sentences, especially the leading three sentences. According to the statistics, about 80.91% sentences selected by NN-SE baseline are in leading three sentences. In the meanwhile, our NEUSUM model selects 58.64% leading three sentences. We can notice that in the oracle, the percentage of selecting leading sentences (sentence 1 to 5) is moderate, which is around 10%. Compared to NN-SE, the position of selected sentences in NEUSUM is closer to the oracle. Although NEUSUM also extracts more leading sentences than the oracle, it selects more tailing ones. For example, our NEUSUM model extracts more than 30% of sentences in the range of sentence 4 to 6. In the range of sentence 7 to 13, NN-SE barely extracts any sentences, but our NEUSUM model still extract sentences in this range. Therefore, we think this is one of the reasons why NEUSUM performs better than NN-SE. We analyze the sentence position distribution and offer an explanation for these observations. 662 Intuitively, leading sentences are important for a well-organized article, especially for newswire articles. It is also well known that LEAD3 is a very strong baseline. In the training data, we found that 50.98% sentences labeled as “should be extracted” belongs to the first 5 sentences, which may cause the trained model tends to select more leading sentences. One possible situation is that one sentence in the tail of a document is more important than the leading sentences, but the margin between them is not large enough. 
The models which separately score and select sentences might not select sentences in the tail whose scores are not higher than the leading ones. These methods may choose the safer leading sentences as a fallback in such confusing situation because there is no direct competition between the leading and tailing candidates. In our NEUSUM model, the scoring and selection are jointly learned, and at each step the tailing candidates can compete directly with the leading ones. Therefore, NEUSUM can be more discriminating when dealing with this situation. 7 Conclusion Conventional approaches to extractive document summarization contain two separated steps: sentence scoring and sentence selection. In this paper, we present a novel neural network framework for extractive document summarization by jointly learning to score and select sentences to address this issue. The most distinguishing feature of our approach from previous methods is that it combines sentence scoring and selection into one phase. Every time it selects a sentence, it scores the sentences according to the partial output summary and current extraction state. ROUGE evaluation results show that the proposed joint sentence scoring and selection approach significantly outperforms previous separated methods. Acknowledgments We thank three anonymous reviewers for their helpful comments. We also thank Danqing Huang, Chuanqi Tan, Zhirui Zhang, Shuangzhi Wu and Wei Jia for helpful discussions. The work of this paper is funded by the project of National Key Research and Development Program of China (No. 2017YFB1002102) and the project of National Natural Science Foundation of China (No. 91520204). The first author is funded by the Harbin Institute of Technology Scholarship Fund. References Ziqiang Cao, Furu Wei, Li Dong, Sujian Li, and Ming Zhou. 2015a. Ranking with recursive neural networks and its application to multi-document summarization. In AAAI, pages 2153–2159. Ziqiang Cao, Furu Wei, Sujian Li, Wenjie Li, Ming Zhou, and WANG Houfeng. 2015b. Learning summary prior representation for extractive summarization. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 829–833. Jaime Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 335–336. ACM. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 484–494, Berlin, Germany. Association for Computational Linguistics. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of EMNLP 2014, pages 1724–1734, Doha, Qatar. Association for Computational Linguistics. John M Conroy and Dianne P O’leary. 2001. Text summarization via hidden markov models. In Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, pages 406–407. ACM. G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. 
Journal of Artificial Intelligence Research, 22:457–479. Dan Gillick and Benoit Favre. 2009. A scalable global model for summarization. In Proceedings of the Workshop on Integer Linear Programming for Natural Langauge Processing, pages 10–18. Association for Computational Linguistics. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Aistats, volume 9, pages 249–256. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693– 1701. 663 Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Eduard Hovy and Chin-Yew Lin. 1998. Automated text summarization and the summarist system. In Proceedings of a workshop on held at Baltimore, Maryland: October 13-15, 1998, pages 197–214. Association for Computational Linguistics. Hakan Inan, Khashayar Khosravi, and Richard Socher. 2017. Tying word vectors and word classifiers: A loss framework for language modeling. In Proceedings of 5th International Conference for Learning Representations. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of 3rd International Conference for Learning Representations, San Diego. Julian Kupiec, Jan Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Proceedings of the 18th annual international ACM SIGIR conference on Research and development in information retrieval, pages 68–73. ACM. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain. Hui Lin and Jeff Bilmes. 2011. A class of submodular functions for document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 510–520. Association for Computational Linguistics. Hans Peter Luhn. 1958. The automatic creation of literature abstracts. IBM Journal of research and development, 2(2):159–165. Ryan McDonald. 2007. A study of global inference algorithms in multi-document summarization. In European Conference on Information Retrieval, pages 557–564. Springer. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In AAAI, pages 3075–3081. Ramesh Nallapati, Bowen Zhou, C¸ a glar Gulc¸ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. Ani Nenkova and Kathleen McKeown. 2011. Automatic summarization. Foundations and Trends R⃝in Information Retrieval, 5(2–3):103–233. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. ICML (3), 28:1310–1318. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. 
Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45–50, Valletta, Malta. ELRA. Pengjie Ren, Zhumin Chen, Zhaochun Ren, Furu Wei, Jun Ma, and Maarten de Rijke. 2017. Leveraging contextual sentence relations for extractive summarization using a neural attention model. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 95–104, New York, NY, USA. ACM. Pengjie Ren, Furu Wei, CHEN Zhumin, MA Jun, and Ming Zhou. 2016. A redundancy-aware sentence regression framework for extractive summarization. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 33–43. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958. Xiaojun Wan and Jianwu Yang. 2006. Improved affinity graph based multi-document summarization. In Proceedings of the human language technology conference of the NAACL, Companion volume: Short papers, pages 181–184. Association for Computational Linguistics.
2018
61
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 664–674 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 664 Unsupervised Abstractive Meeting Summarization with Multi-Sentence Compression and Budgeted Submodular Maximization Guokan Shang1,2, Wensi Ding1∗, Zekun Zhang1∗, Antoine J.-P. Tixier1, Polykarpos Meladianos1,3, Michalis Vazirgiannis1,3, Jean-Pierre Lorr´e2 1 ´Ecole Polytechnique, 2Linagora, 3AUEB Abstract We introduce a novel graph-based framework for abstractive meeting speech summarization that is fully unsupervised and does not rely on any annotations. Our work combines the strengths of multiple recent approaches while addressing their weaknesses. Moreover, we leverage recent advances in word embeddings and graph degeneracy applied to NLP to take exterior semantic knowledge into account, and to design custom diversity and informativeness measures. Experiments on the AMI and ICSI corpus show that our system improves on the state-of-the-art. Code and data are publicly available1, and our system can be interactively tested2. 1 Introduction People spend a lot of their time in meetings. The ubiquity of web-based meeting tools and the rapid improvement and adoption of Automatic Speech Recognition (ASR) is creating pressing needs for effective meeting speech summarization mechanisms. Spontaneous multi-party meeting speech transcriptions widely differ from traditional documents. Instead of grammatical, well-segmented sentences, the input is made of often ill-formed and ungrammatical text fragments called utterances. On top of that, ASR transcription and segmentation errors inject additional noise into the input. In this paper, we combine the strengths of 6 approaches that had previously been applied ∗Work done as part of 3rd year project, with equal contribution. 1https://bitbucket.org/dascim/acl2018_abssumm 2http://datascience.open-paas.org/abs_summ_app to 3 different tasks (keyword extraction, multisentence compression, and summarization) into a unified, fully unsupervised end-to-end meeting speech summarization framework that can generate readable summaries despite the noise inherent to ASR transcriptions. We also introduce some novel components. Our method reaches state-ofthe-art performance and can be applied to languages other than English in an almost out-of-thebox fashion. 2 Framework Overview As illustrated in Figure 1, our system is made of 4 modules, briefly described in what follows. 1. Text Preprocessing 1. Text Preprocessing 2. Utterance Community Detection 2. Utterance Community Detection 3. Multi-Sentence Compression 3. Multi-Sentence Compression Word Graph Building Word Graph Building Transcription Transcription Path Selection & Reranking Path Selection & Reranking Edge Weight Assignment Edge Weight Assignment 4. Budgeted Submodular Maximization 4. Budgeted Submodular Maximization Summary Summary Automatic Speech Recognition Automatic Speech Recognition Figure 1: Overarching system pipeline. The first module pre-processes text. The goal of the second Community Detection step is to group together the utterances that should be summarized by a common abstractive sentence (Murray et al., 2012). These utterances typically correspond to a topic or subtopic discussed during the meeting. A single abstractive sentence is then separately generated for each community, using an extension of the Multi-Sentence Compression Graph (MSCG) of Filippova (2010). 
Finally, we generate a summary by selecting the best elements from the set of abstractive sentences under a budget constraint. We cast this problem as the maximization of a custom submodular quality function. 665 Note that our approach is fully unsupervised and does not rely on any annotations. Our input simply consists in a list of utterances without any metadata. All we need in addition to that is a part-of-speech tagger, a language model, a set of pre-trained word vectors, a list of stopwords and fillerwords, and optionally, access to a lexical database such as WordNet. Our system can work out-of-the-box in most languages for which such resources are available. 3 Related Work and Contributions As detailed below, our framework combines the strengths of 6 recent works. It also includes novel components. 3.1 Multi-Sentence Compression Graph (MSCG) (Filippova, 2010) Description: a fully unsupervised, simple approach for generating a short, self-sufficient sentence from a cluster of related, overlapping sentences. As shown in Figure 5, a word graph is constructed with special edge weights, the K-shortest weighted paths are then found and re-ranked with a scoring function, and the best path is used as the compression. The assumption is that redundancy alone is enough to ensure informativeness and grammaticality. Limitations: despite making great strides and showing promising results, Filippova (2010) reported that 48% and 36% of the generated sentences were missing important information and were not perfectly grammatical. Contributions: to respectively improve informativeness and grammaticality, we combine ideas found in Boudin and Morin (2013) and Mehdad et al. (2013), as described next. 3.2 More informative MSCG (Boudin and Morin, 2013) Description: same task and approach as in Filippova (2010), except that a word co-occurrence network is built from the cluster of sentences, and that the PageRank scores of the nodes are computed in the manner of Mihalcea and Tarau (2004). The scores are then injected into the path re-ranking function to favor informative paths. Limitations: PageRank is not state-of-the-art in capturing the importance of words in a document. Grammaticality is not considered. Contributions: we take grammaticality into account as explained in subsection 3.4. We also follow recent evidence (Tixier et al., 2016a) that spreading influence, as captured by graph degeneracy-based measures, is better correlated with “keywordedness” than PageRank scores, as explained in the next subsection. 3.3 Graph-based word importance scoring (Tixier et al., 2016a) Word co-occurrence network. As shown in Figure 2, we consider a word co-occurrence network as an undirected, weighted graph constructed by sliding a fixed-size window over text, and where edge weights represent co-occurrence counts (Tixier et al., 2016b; Mihalcea and Tarau, 2004). ● ● ● ● ● ● ● categori tend doubt bit big peopl remot design general fli featur button ti CoreRank numbers 34 36 40 41 45 46 70 Edge weights 1 2 5 6 Figure 2: Word co-occurrence graph example, for the input text shown in Figure 5. Important words are influential nodes. In social networks, it was shown that influential spreaders, that is, those individuals that can reach the largest part of the network in a given number of steps, are better identified via their core numbers rather than via their PageRank scores or degrees (Kitsak et al., 2010). See Figure 3 for the intuition. Similarly, in NLP, Tixier et al. 
(2016a) have shown that keywords are better identified via their core numbers rather than via their TextRank scores, that is, keywords are influencers within their word cooccurrence network. Graph degeneracy (Seidman, 1983). Let G(V, E) be an undirected, weighted graph with n = |V | nodes and m = |E| edges. A k-core of G is a maximal subgraph of G in which every vertex v has at least weighted degree k. As shown in Figures 3 and 4, the k-core decomposition of G forms a hierarchy of nested subgraphs whose cohesiveness and size respectively increase and decrease with k. The higher-level cores can be viewed as a filtered version of the graph that 666 excludes noise. This property is highly valuable when dealing with graphs constructed from noisy text, like utterances. The core number of a node is the highest order of a core that contains this node. Figure 3: k-core decomposition. The blue and the yellow nodes have same degree and similar PageRank numbers. However, the blue node is a much more influential spreader as it is strategically placed in the core of the network, as captured by its higher core number. The CoreRank number of a node (Tixier et al., 2016a; Bae and Kim, 2014) is defined as the sum of the core numbers of its neighbors. As shown in Figure 4, CoreRank more finely captures the structural position of each node in the graph than raw core numbers. Also, stabilizing scores across node neighborhoods enhances the inherent noise robustness property of graph degeneracy, which is desirable when working with noisy speech-to-text output. 3-core 2-core 1-core Core number Core number Core number c = 1 c = 2 c = 3 * ** Figure 4: Value added by CoreRank: while nodes ⋆and ⋆⋆ have the same core number (=2), node ⋆has a greater CoreRank score (3+2+2=7 vs 2+2+1=5), which better reflects its more central position in the graph. Time complexity. Building a graph-of-words is O(nW), and computing the weighted k-core decomposition of a graph requires O(m log(n)) (Batagelj and Zaverˇsnik, 2002). For small pieces of text, this two step process is so affordable that it can be used in real-time (Meladianos et al., 2017). Finally, computing CoreRank scores can be done with only a small overhead of O(n), provided that the graph is stored as a hash of adjacency lists. Getting the CoreRank numbers from scratch for a community of utterances is therefore very fast, especially since typically in this context, n ∼10 and m ∼100. 3.4 Fluency-aware, more abstractive MSCG (Mehdad et al., 2013) Description: a supervised end-to-end framework for abstractive meeting summarization. Community Detection is performed by (1) building an utterance graph with a logistic regression classifier, and (2) applying the CONGA algorithm. Then, before performing sentence compression with the MSCG, the authors also (3) build an entailment graph with a SVM classifier in order to eliminate redundant and less informative utterances. In addition, the authors propose the use of WordNet (Miller, 1995) during the MSCG building phase to capture lexical knowledge between words and thus generate more abstractive compressions, and of a language model when re-ranking the shortest paths, to favor fluent compressions. Limitations: this effort was a significant advance, as it was the first application of the MSCG to the meeting summarization task, to the best of our knowledge. 
However, steps (1) and (3) above are complex, based on handcrafted features, and respectively require annotated training data in the form of links between human-written abstractive sentences and original utterances and multiple external datasets (e.g., from the Recognizing Textual Entailment Challenge). Such annotations are costly to obtain and very seldom available in practice. Contributions: while we retain the use of WordNet and of a language model, we show that, without deteriorating the quality of the results, steps (1) and (2) above (Community Detection) can be performed in a much more simple, completely unsupervised way, and that step (3) can be removed. That is, the MSCG is powerful enough to remove redundancy and ensure informativeness, should proper edge weights and path re-ranking function be used. In addition to the aforementioned contributions, we also introduce the following novel components into our abstractive summarization pipeline: • we inject global exterior knowledge into the edge weights of the MSCG, by using the Word Attraction Force of Wang et al. (2014), based on 667 distance in the word embedding space, • we add a diversity term to the path re-ranking function, that measures how many unique clusters in the embedding space are visited by each path, • rather than using all the abstractive sentences as the final summary like in Mehdad et al. (2013), we maximize a custom submodular function to select a subset of abstractive sentences that is nearoptimal given a budget constraint (summary size). A brief background of submodularity in the context of summarization is provided next. 3.5 Submodularity for summarization (Lin and Bilmes, 2010; Lin, 2012) Selecting an optimal subset of abstractive sentences from a larger set can be framed as a budgeted submodular maximization task: argmax S⊆S f(S)| X s∈S cs ≤B (1) where S is a summary, cs is the cost (word count) of sentence s, B is the desired summary size in words (budget), and f is a summary quality scoring set function, which assigns a single numeric score to a summary S. This combinatorial optimization task is NPhard. However, near-optimal performance can be guaranteed with a modified greedy algorithm (Lin and Bilmes, 2010) that iteratively selects the sentence s that maximizes the ratio of quality function gain to scaled cost f(S∪s)−f(S)/cr s (where S is the current summary and r ≥0 is a scaling factor). In order for the performance guarantees to hold however, f has to be submodular and monotone non-decreasing. Our proposed f is described in subsection 4.4. 4 Our Framework We detail next each of the four modules in our architecture (shown in Figure 1). 4.1 Text preprocessing We adopt preprocessing steps tailored to the characteristics of ASR transcriptions. Consecutive repeated unigrams and bigrams are reduced to single terms. Specific ASR tags, such as {vocalsound}, {pause}, and {gap} are filtered out. In addition, filler words, such as uh-huh, okay, well, and by the way are also discarded. Consecutive stopwords at the beginning and end of utterances are stripped. In the end, utterances that contain less than 3 nonstopwords are pruned out. The surviving utterances are used for the next steps. 4.2 Utterance community detection The goal here is to cluster utterances into communities that should be summarized by a common abstractive sentence. 
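Before clustering, every utterance passes through the preprocessing of Section 4.1 above. The sketch below illustrates one possible implementation of those steps; the stopword list (NLTK's English list) and the filler pattern (limited to the examples mentioned in the text) are assumptions rather than the exact resources used here.

```python
import re
from nltk.corpus import stopwords  # assumed stand-in for the actual stopword list

STOPWORDS = set(stopwords.words("english"))
ASR_TAGS = re.compile(r"\{(?:vocalsound|pause|gap)\}")
# Only the fillers named in the text; a real system would use a longer list
# and would disambiguate fillers from content uses of words like "well".
FILLERS = re.compile(r"\b(?:uh-huh|okay|well|by the way)\b")

def collapse_repeats(tokens):
    """Reduce consecutive repeated unigrams and bigrams to a single occurrence."""
    out = []
    for tok in tokens:
        out.append(tok)
        for n in (1, 2):
            if len(out) >= 2 * n and out[-2 * n:-n] == out[-n:]:
                del out[-n:]
    return out

def preprocess(utterance):
    """Return a cleaned utterance, or None if it should be pruned."""
    text = ASR_TAGS.sub(" ", utterance.lower())
    text = FILLERS.sub(" ", text)
    tokens = collapse_repeats(text.split())
    # Strip consecutive stopwords at the beginning and end of the utterance.
    while tokens and tokens[0] in STOPWORDS:
        tokens.pop(0)
    while tokens and tokens[-1] in STOPWORDS:
        tokens.pop()
    # Prune utterances containing fewer than 3 non-stopwords.
    if sum(tok not in STOPWORDS for tok in tokens) < 3:
        return None
    return " ".join(tokens)
```

Utterances surviving this filter are the input to the community detection step discussed next.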
We initially experimented with techniques capitalizing on word vectors, such as k-means and hierarchical clustering based on the Euclidean distance or the Word Mover’s Distance (Kusner et al., 2015). We also tried graph-based approaches, such as community detection in a complete graph where nodes are utterances and edges are weighted based on the aforementioned distances. Best results were obtained, however, with a simple approach in which utterances are projected into the vector space and assigned standard TFIDF weights. Then, the dimensionality of the utterance-term matrix is reduced with Latent Semantic Analysis (LSA), and finally, the k-means algorithm is applied. Note that LSA is only used here, during the utterance community detection phase, to remove noise and stabilize clustering. We do not use a topic graph in our approach. We think using word embeddings was not effective, because in meeting speech, as opposed to traditional documents, participants tend to use the same term to refer to the same thing throughout the entire conversation, as noted by Riedhammer et al. (2010), and as verified in practice. This is probably why, for clustering utterances, capturing synonymy is counterproductive, as it artificially reduces the distance between every pair of utterances and blurs the picture. 4.3 Multi-Sentence Compression The following steps are performed separately for each community. Word importance scoring From a processed version of the community (stemming and stopword removal), we construct an undirected, weighted word co-occurrence network as described in subsection 3.3. We use a sliding window of size W = 6 not overspanning utterances. Note that stemming is performed only here, and for the sole purpose of building the word cooccurrence network. We then compute the CoreRank numbers of the nodes as described in subsection 3.3. 668 Figure 5: Compressed sentence (in bold red) generated by our multi-sentence compression graph (MSCG) for a 3-utterance community from meeting IS1009b of the AMI corpus. Using Filippova (2010)’s weighting and re-ranking scheme here would have selected another path: design different remotes for different people bit of it’s from their tend to for ti. Note that the compressed sentence does not appear in the initial set of utterances, and is compact and grammatical, despite the redundancy, transcription and segmentation errors of the input. The abstractive and robust nature of the MSCG makes it particularly well-suited to the meeting domain. buttons for is a different big from like for three we be people doubt to ti if their are it different which for of we people having that design each remote will for different that because of designing all the remotes bit mean can generally to tend three for its categories different START the be need with of features like flies END generally we can design a remote which is mean need for people bit of it's from their tend to for ti design different remotes for different people like for each to be the that will be big buttons doubt like with it because flies that if we design of remote having all the different features for different people are designing three different remotes for three different categories of people We finally reweigh the CoreRank scores, indicative of word importance within a given community, with a quantity akin to an Inverse Document Frequency, where communities serve as documents and the full meeting as the collection. 
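Before that reweighting is formalized in Equation 2 below, the sketch here illustrates the graph side of the scoring: a sliding-window co-occurrence graph (W = 6, windows never spanning utterances) and CoreRank scores computed as the sum of the neighbours' core numbers. Using networkx is an assumed implementation choice, and its unweighted core decomposition is a simplification of the weighted k-core decomposition described in subsection 3.3.

```python
from collections import Counter
import networkx as nx  # assumed implementation choice

def cooccurrence_graph(utterances, window=6):
    """Undirected, weighted word co-occurrence graph.

    utterances: list of token lists (already stemmed, stopwords removed).
    Each token is linked to the tokens that follow it within the window;
    edge weights accumulate co-occurrence counts. Windows never span utterances.
    """
    weights = Counter()
    for tokens in utterances:
        for i, u in enumerate(tokens):
            for v in tokens[i + 1:i + window]:
                if u != v:
                    weights[tuple(sorted((u, v)))] += 1
    graph = nx.Graph()
    for (u, v), w in weights.items():
        graph.add_edge(u, v, weight=w)
    return graph

def corerank(graph):
    """CoreRank score of a node = sum of the core numbers of its neighbours."""
    core = nx.core_number(graph)  # unweighted k-core; the paper uses the weighted variant
    return {node: sum(core[nbr] for nbr in graph[node]) for node in graph}
```

The per-community scores returned by corerank play the role of the term weights TW in the TW-IDF reweighting formalized next.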
We thus obtain something equivalent to the TW-IDF weighting scheme of Rousseau and Vazirgiannis (2013), where the CoreRank scores are the term weights TW: TW-IDF(t, d, D) = TW(t, d) × IDF(t, D) (2) where t is a term belonging to community d, and D is the set of all utterance communities. We compute the IDF as IDF(t, D) = 1 + log|D|/Dt, where |D| is the number of communities and Dt the number of communities containing t. The intuition behind this reweighing scheme is that a term should be considered important within a given meeting if it has a high CoreRank score within its community and if the number of communities in which the term appears is relatively small. Word graph building The backbone of the graph is laid out as a directed sequence of nodes corresponding to the words in the first utterance, with special START and END nodes at the beginning and at the end (see Figure 5). Edge direction follows the natural flow of text. Words from the remaining utterances are then iteratively added to the graph (between the START and END nodes) based on the following rules: 1) if the word is a non-stopword, the word is mapped onto an existing node if it has the same lowercased form and the same part-of-speech tag3. In case of multiple matches, we check the immediate context (the preceding and following words in the utterance and the neighboring nodes in the graph), and we pick the node with the largest context overlap or which has the greatest number of words already mapped to it (when no overlap). When there is no match, we use WordNet as described in Appendix A. 2) if the word is a stopword and there is a match, it is mapped only if there is an overlap of at least one non-stopword in the immediate context. Otherwise, a new node is created. Finally, note that any two words appearing within the same utterance cannot be mapped to the same node. This ensures that every utterance is a loopless path in the graph. Of course, there are many more paths in the graphs than original utterances. Edge Weight Assignment Once the word graph is constructed, we assign weights to its edges as: w′′′(pi, pj) = w′(pi, pj) w′′(pi, pj) (3) where pi and pj are two neighbors in the MSCG. As detailed next, those weights combine local cooccurrence statistics (numerator) with global exterior knowledge (denominator). Note that the lower 3We used NLTK’s averaged perceptron tagger, available at: http://www.nltk. org/api/nltk.tag.html#module-nltk.tag.perceptron 669 Figure 6: t-SNE visualization (Maaten and Hinton, 2008) of the Google News vectors of the words in the utterance community shown in Figure 5. Arrows join the words in the best compression path shown in Figure 5. Movements in the embedding space, as measured by the number of unique clusters covered by the path (here, 6/11), provide a sense of the diversity of the compressed sentence, as formalized in Equation 10. 100 0 100 200 300 200 100 0 100 200 300 different for people design of to three be that remotes remote like we the flies because features is it are need if from it's generally tend buttons their doubt which ti all big designing bit with categories a will can each having mean the weight of an edge, the better. Local co-occurrence statistics. We use Filippova (2010)’s formula: w′(pi, pj) = f(pi) + f(pj) P P∈G′,pi,pj∈P diff(P, pi, pj)−1 (4) where f(pi) is the number of words mapped to node pi in the MSCG G′, and diff(P, pi, pj)−1 is the inverse of the distance between pi and pj in a path P (in number of hops). 
This weighting function favors edges between infrequent words that frequently appear close to each other in the text (the lower, the better).

Global exterior knowledge. We introduce a second term based on the Word Attraction Force score of Wang et al. (2014):

w''(p_i, p_j) = \frac{f(p_i) \times f(p_j)}{d^2_{p_i, p_j}}    (5)

where d_{p_i, p_j} is the Euclidean distance between the words mapped to p_i and p_j in a word embedding space (we use the GoogleNews vectors, https://code.google.com/archive/p/word2vec). This component favors paths going through salient words that have high semantic similarity (the higher, the better). The goal is to ensure the readability of the compression by avoiding sentences that jump from one word to a completely unrelated one.

Path re-ranking. As in Boudin and Morin (2013), we use a shortest weighted path algorithm to find the K paths between the START and END symbols having the lowest cumulative edge weight:

W(P) = \sum_{i=1}^{|P|-1} w'''(p_i, p_{i+1})    (6)

where |P| is the number of nodes in the path. Paths having fewer than z words or that do not contain a verb are filtered out (z is a tuning parameter). However, unlike Boudin and Morin (2013), we re-rank the K best paths with the following novel weighting scheme (the lower, the better), and the path with the lowest score is used as the compression:

score(P) = \frac{W(P)}{|P| \times F(P) \times C(P) \times D(P)}    (7)

The denominator takes into account the length of the path and its fluency (F), coverage (C), and diversity (D), which are detailed in what follows.

Fluency. We estimate the grammaticality of a path with an n-gram language model; in our experiments, we used a trigram model (the CMUSphinx English LM, https://cmusphinx.github.io):

F(P) = \frac{\sum_{i=1}^{|P|} \log \Pr(p_i \mid p_{i-n+1}^{i-1})}{\#\text{n-gram}}    (8)

where |P| denotes the path length, and p_i and #n-gram are respectively the words and the number of n-grams in the path.

Coverage. We reward the paths that visit important nouns, verbs and adjectives:

C(P) = \frac{\sum_{p_i \in P} \text{TW-IDF}(p_i)}{\#p_i}    (9)

where #p_i is the number of nouns, verbs and adjectives in the path. The TW-IDF scores are computed as explained in subsection 4.3.

Diversity. We cluster all words from the MSCG in the word embedding space by applying the k-means algorithm. We then measure the diversity of the vocabulary contained in a path as the number of unique clusters visited by the path, normalized by the length of the path:

D(P) = \frac{\sum_{j=1}^{k} \mathbb{1}_{\exists p_i \in P \mid p_i \in \text{cluster}_j}}{|P|}    (10)

The graphical intuition for this measure is provided in Figure 6. Note that we do not normalize D by the total number of clusters (only by path length) because k is fixed for all candidate paths.

4.4 Budgeted submodular maximization

We apply the previous steps separately to all utterance communities, which results in a set S of abstractive sentences (one for each community). This set of sentences can already be considered a summary of the meeting. However, it might exceed the maximum size allowed, and it may still contain some redundancy or off-topic sections unrelated to the general theme of the meeting (e.g., chit-chat). Therefore, we design the following submodular and monotone non-decreasing objective function:

f(S) = \sum_{s_i \in S} n_{s_i} w_{s_i} + \lambda \sum_{j=1}^{k} \mathbb{1}_{\exists s_i \in S \mid s_i \in \text{group}_j}    (11)

where λ ≥ 0 is the trade-off parameter, n_{s_i} is the number of occurrences of word s_i in S, and w_{s_i} is the CoreRank score of s_i. Then, as explained in subsection 3.5, we obtain a near-optimal subset of abstractive sentences by maximizing f with a greedy algorithm.
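To illustrate how the greedy procedure of subsection 3.5 is applied to Equation 11, here is a minimal sketch. The data structures (candidate sentences as token lists, a CoreRank score per word, and a cluster id per word) are assumptions for illustration, and the final step of Lin and Bilmes (2010)'s modified greedy algorithm, which also considers the best single sentence fitting the budget, is omitted for brevity.

```python
from collections import Counter

def quality(selected, corerank, cluster_of, lam):
    """f(S) from Equation 11: CoreRank-weighted word counts plus lam * #clusters covered."""
    counts = Counter(w for sent in selected for w in sent)
    informativeness = sum(n * corerank.get(w, 0.0) for w, n in counts.items())
    covered = {cluster_of[w] for sent in selected for w in sent if w in cluster_of}
    return informativeness + lam * len(covered)

def greedy_summary(candidates, corerank, cluster_of, budget, lam=0.5, r=1.0):
    """Iteratively add the sentence with the best quality gain / cost**r ratio under the budget."""
    selected, remaining, length = [], list(candidates), 0
    while remaining:
        base = quality(selected, corerank, cluster_of, lam)
        best, best_ratio = None, float("-inf")
        for sent in remaining:
            cost = max(len(sent), 1)          # cost = word count of the candidate sentence
            if length + cost > budget:
                continue
            gain = quality(selected + [sent], corerank, cluster_of, lam) - base
            ratio = gain / cost ** r          # scaled-cost greedy criterion of Lin and Bilmes (2010)
            if ratio > best_ratio:
                best, best_ratio = sent, ratio
        if best is None:                      # nothing else fits in the remaining budget
            break
        selected.append(best)
        length += max(len(best), 1)
        remaining.remove(best)
    return selected
```

Because f is monotone non-decreasing and submodular, this greedy selection retains the near-optimality guarantee discussed in subsection 3.5; λ and r correspond to the tuning parameters searched in Section 5.3.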
CoreRank scores and clusters are found as previously described, except that this time they are obtained from the full processed meeting transcription rather than from a single utterance community. 5 Experimental setup 5.1 Datasets We conducted experiments on the widely-used AMI (McCowan et al., 2005) and ICSI (Janin et al., 2003) benchmark datasets. We used the traditional test sets of 20 and 6 meetings respectively for the AMI and ICSI corpora (Riedhammer et al., 2008). Each meeting in the AMI test set is associated with a human abstractive summary of 290 words on average, whereas each meeting in the ICSI test set is associated with 3 human abstractive summaries of respective average sizes 220, 220 and 670 words. For parameter tuning, we constructed development sets of 47 and 25 meetings, respectively for AMI and ICSI, by randomly sampling from the training sets. The word error rate of the ASR transcriptions is respectively of 36% and 37% for AMI and ICSI. 5.2 Baselines We compared our system against 7 baselines, which are listed below and more thoroughly detailed in Appendix B. Note that preprocessing was exactly the same for our system and all baselines. • Random and Longest Greedy are basic baselines recommended by (Riedhammer et al., 2008), • TextRank (Mihalcea and Tarau, 2004), • ClusterRank (Garg et al., 2009), • CoreRank & PageRank submodular (Tixier et al., 2017), • Oracle is the same as the random baseline, but uses the human extractive summaries as input. In addition to the baselines above, we included in our comparison 3 variants of our system using different MSCGs: Our System (Baseline) uses the original MSCG of Filippova (2010), Our System (KeyRank) uses that of Boudin and Morin (2013), and Our System (FluCovRank) that of Mehdad et al. (2013). Details about each approach were given in Section 3. 5.3 Parameter tuning For Our System and each of its variants, we conducted a grid search on the development sets of each corpus, for fixed summary sizes of 350 and 450 words (AMI and ICSI). We searched the following parameters: • n: number of utterance communities (see Section 4.2). We tested values of n ranging from 20 to 60, with steps of 5. This parameter controls how much abstractive should the summary be. If all utterances are assigned to their own singleton community, the MSCG is of no utility, and our framework is extractive. It becomes more and more abstractive as the number of communities decreases. • z: minimum path length (see Section 4.3). We searched values in the range [6, 16] with steps of 2. If a path is shorter than a certain minimum number of words, it often corresponds to an invalid sentence, and should thereby be filtered out. • λ and r, the trade-off parameter and the scaling factor (see Section 4.4). We searched [0, 1] and [0, 2] (respectively) with steps of 0.1. The parameter λ plays a regularization role favoring diversity. 671 The scaling factor makes sure the quality function gain and utterance cost are comparable. The best parameter values for each corpus are summarized in Table 1. λ is mostly non-zero, indicating that it is necessary to include a regularization term in the submodular function. In some cases though, r is equal to zero, which means that utterance costs are not involved in the greedy decision heuristic. These observations contradict the conclusion of Lin (2012) that r = 0 cannot give best results. 
System AMI ICSI Our System 50, 8, (0.7, 0.5) 40, 14, (0.0, 0.0) Our System (Baseline) 50, 12, (0.3, 0.5) 45, 14, (0.1, 0.0) Our System (KeyRank) 50, 10, (0.2, 0.9) 45, 12, (0.3, 0.4) Our System (FluCovRank) 35, 6, (0.4, 1.0) 50, 10, (0.2, 0.3) Table 1: Optimal parameter values n, z, (λ, r). Apart from the tuning parameters, we set the number of LSA dimensions to 30 and 60 (resp. on AMI and ISCI). The small number of LSA dimensions retained can be explained by the fact that the AMI and ICSI transcriptions feature 532 and 1126 unique words on average, which is much smaller than traditional documents. This is due to relatively small meeting duration, and to the fact that participants tend to stick to the same terms throughout the entire conversation. For the kmeans algorithm, k was set equal to the minimum path length z when doing MSCG path re-ranking (see Equation 10), and to 60 when generating the final summary (see Equation 11). Following Boudin and Morin (2013), the number of shortest weighted paths K was set to 200, which is greater than the K = 100 used by Filippova (2010). Increasing K from 100 improves performance with diminishing returns, but significantly increases complexity. We empirically found 200 to be a good trade-off. 6 Results and Interpretation Metrics. We evaluated performance with the widely-used ROUGE-1, ROUGE-2 and ROUGESU4 metrics (Lin, 2004). These metrics are respectively based on unigram, bigram, and unigram plus skip-bigram overlap with maximum skip distance of 4, and have been shown to be highly correlated with human evaluations (Lin, 2004). ROUGE-2 scores can be seen as a measure of summary readability (Lin and Hovy, 2003; Ganesan et al., 2010). ROUGE-SU4 does not require consecutive matches but is still sensitive to word order. Macro-averaged results for summaries generated from automatic transcriptions can be seen in Figure 7 and Table 2. Table 2 provides detailed comparisons over the fixed budgets that we used for parameter tuning, while Figure 7 shows the performance of the models for budgets ranging from 150 to 500 words. The same information for summaries generated from manual transcriptions is available in Appendix C. Finally, summary examples are available in Appendix D. ROUGE-1. Our systems outperform all baselines on AMI (including Oracle) and all baselines on ICSI (except Oracle). Specifically, Our System is best on ICSI, while Our System (KeyRank) is superior on AMI. We can also observe on Figure 7 that our systems are consistently better throughout the different summary sizes, even though their parameters were tuned for specific sizes only. This shows that the best parameter values are quite robust across the entire budget range. ROUGE-2. Again, our systems (except Our System (Baseline)) outperform all baselines, except Oracle. In addition, Our System and Our System (FluCovRank) consistently improve on Our System (Baseline), which proves that the novel components we introduce improve summary fluency. ROUGE-SU4. ROUGE-SU4 was used to measure the amount of in-order word pairs overlapping. Our systems are competitive with all baselines, including Oracle. Like with ROUGE-1, Our System is better than Our System (KeyRank) on ICSI, whereas the opposite is true on AMI. General remarks. • The summaries of all systems except Oracle were generated from noisy ASR transcriptions, but were compared against human abstractive summaries. 
ROUGE being based on word overlap, it makes it very difficult to reach very high scores, because many words in the ground truth summaries do not appear in the transcriptions at all. • The scores of all systems are lower on ICSI than on AMI. This can be explained by the fact that on ICSI, the system summaries have to jointly match 3 human abstractive summaries of different content and size, which is much more difficult than matching a single summary. • Our framework is very competitive to Oracle, which is notable since the latter has direct access to the human extractive summaries. Note that Or672 150 200 250 300 350 400 450 500 summary size (words) 0.26 0.28 0.30 0.32 0.34 0.36 0.38 ROUGE-1 F1-score AMI OUR SYSTEM OUR SYSTEM (BASELINE) OUR SYSTEM (KEYRANK) OUR SYSTEM (FLUCOVRANK) ORACLE CORERANK SUBMODULAR PAGERANK SUBMODULAR TEXTRANK CLUSTERRANK LONGEST GREEDY RANDOM 150 200 250 300 350 400 450 500 summary size (words) 0.22 0.24 0.26 0.28 0.30 0.32 ICSI OUR SYSTEM OUR SYSTEM (BASELINE) OUR SYSTEM (KEYRANK) OUR SYSTEM (FLUCOVRANK) ORACLE CORERANK SUBMODULAR PAGERANK SUBMODULAR TEXTRANK CLUSTERRANK LONGEST GREEDY RANDOM Figure 7: ROUGE-1 F-1 scores for various budgets (ASR transcriptions). AMI ROUGE-1 AMI ROUGE-2 AMI ROUGE-SU4 ICSI ROUGE-1 ICSI ROUGE-2 ICSI ROUGE-SU4 R P F-1 R P F-1 R P F-1 R P F-1 R P F-1 R P F-1 Our System 41.83 34.44 37.25 8.22 6.95 7.43 15.83 13.70 14.51 36.99 28.12 31.60 5.41 4.39 4.79 13.10 10.17 11.35 Our System (Baseline) 41.56 34.37 37.11 7.88 6.66 7.11 15.36 13.20 14.02 36.39 27.20 30.80 5.19 4.12 4.55 12.59 9.70 10.86 Our System (KeyRank) 42.43 35.01 37.86 8.72 7.29 7.84 16.19 13.76 14.71 35.95 27.00 30.52 4.64 3.64 4.04 12.43 9.23 10.50 Our System (FluCovRank) 41.84 34.61 37.37 8.29 6.92 7.45 16.28 13.48 14.58 36.27 27.56 31.00 5.56 4.35 4.83 13.47 9.85 11.29 Oracle 40.49 34.65 36.73 8.07 7.35 7.55 15.00 14.03 14.26 37.91 28.39 32.12 5.73 4.82 5.18 13.35 10.73 11.80 CoreRank Submodular 41.14 32.93 36.13 8.06 6.88 7.33 14.84 13.91 14.18 35.22 26.34 29.82 4.36 3.76 4.00 12.11 9.58 10.61 PageRank Submodular 40.84 33.08 36.10 8.27 6.88 7.42 15.37 13.71 14.32 36.05 26.69 30.40 4.82 4.16 4.42 12.19 10.39 11.14 TextRank 39.55 32.60 35.25 7.67 6.43 6.90 14.87 12.87 13.62 34.89 26.33 29.70 4.60 3.74 4.09 12.42 9.43 10.64 ClusterRank 39.36 32.53 35.14 7.14 6.05 6.46 14.34 12.80 13.35 32.63 24.44 27.64 4.03 3.44 3.68 11.04 8.88 9.77 Longest Greedy 37.31 30.93 33.35 5.77 4.71 5.11 13.79 11.11 12.15 35.57 26.74 30.23 4.84 3.88 4.27 13.09 9.46 10.90 Random 39.42 32.48 35.13 6.88 5.89 6.26 14.07 12.70 13.17 34.78 25.75 29.28 4.19 3.51 3.78 11.61 9.37 10.29 Table 2: Macro-averaged results for 350 and 450 word summaries (ASR transcriptions). acle does not reach very high ROUGE scores because the overlap between the human extractive and abstractive summaries is low (19% and 29%, respectively on AMI and ICSI test sets). 7 Conclusion and Next Steps Our framework combines the strengths of 6 approaches that had previously been applied to 3 different tasks (keyword extraction, multi-sentence compression, and summarization) into a unified, fully unsupervised end-to-end summarization framework, and introduces some novel components. Rigorous evaluation on the AMI and ICSI corpora shows that we reach state-of-the-art performance, and generate reasonably grammatical abstractive summaries despite taking noisy utterances as input and not relying on any annotations or training data. 
Finally, thanks to its fully unsupervised nature, our method is applicable to other languages than English in an almost out-of-thebox manner. Our framework was developed for the meeting domain. Indeed, our generative component, the multi-sentence compression graph (MSCG), needs redundancy to perform well. Such redundancy is typically present in meeting speech but not in traditional documents. In addition, the MSCG is by design robust to noise, and our custom path re-ranking strategy, based on graph degeneracy, makes it even more robust to noise. As a result, our framework is advantaged on ASR input. Finally, we use a language model to favor fluent paths, which is crucial when working with (meeting) speech but not that important when dealing with well-formed input. Future efforts should be dedicated to improving the community detection phase and generating more abstractive sentences, probably by harnessing Deep Learning. However, the lack of large training sets for the meeting domain is an obstacle to the use of neural approaches. Acknowledgments We are grateful to the four anonymous reviewers for their detailed and constructive feedback. This research was supported in part by the OpenPaaS::NG project. 673 References Joonhyun Bae and Sangwook Kim. 2014. Identifying and ranking influential spreaders in complex networks by neighborhood coreness. Physica A: Statistical Mechanics and its Applications 395:549–559. Vladimir Batagelj and Matjaˇz Zaverˇsnik. 2002. Generalized cores. arXiv preprint cs/0202039 . Florian Boudin and Emmanuel Morin. 2013. Keyphrase extraction for n-best reranking in multi-sentence compression. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 298–305. http://aclweb.org/anthology/N13-1030. Katja Filippova. 2010. Multi-sentence compression: Finding shortest paths in word graphs. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). Coling 2010 Organizing Committee, pages 322–330. http://aclweb.org/anthology/C10-1037. Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A graph based approach to abstractive summarization of highly redundant opinions. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). Coling 2010 Organizing Committee, pages 340–348. http://aclweb.org/anthology/C10-1039. Nikhil Garg, Benoit Favre, Korbinian Reidhammer, and Dilek Hakkani-T¨ur. 2009. Clusterrank: a graph based method for meeting summarization. In Tenth Annual Conference of the International Speech Communication Association. A. Janin, D. Baron, J. Edwards, D. Ellis, D. Gelbart, N. Morgan, B. Peskin, T. Pfau, E. Shriberg, A. Stolcke, and C. Wooters. 2003. The icsi meeting corpus. In Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP ’03). 2003 IEEE International Conference on. volume 1, pages I–364–I–367 vol.1. https://doi.org/10.1109/ICASSP.2003.1198793. Maksim Kitsak, Lazaros K Gallos, Shlomo Havlin, Fredrik Liljeros, Lev Muchnik, H Eugene Stanley, and Hern´an A Makse. 2010. Identification of influential spreaders in complex networks. Nature Physics 6(11):888–893. https://doi.org/10.1038/nphys1746. Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37. 
JMLR.org, ICML’15, pages 957–966. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out. http://aclweb.org/anthology/W041013. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. http://aclweb.org/anthology/N03-1020. Hui Lin. 2012. Submodularity in natural language processing: algorithms and applications. University of Washington. Hui Lin and Jeff Bilmes. 2010. Multi-document summarization via budgeted maximization of submodular functions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 912–920. http://aclweb.org/anthology/N101134. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research 9(Nov):2579–2605. Iain McCowan, Jean Carletta, W Kraaij, S Ashby, S Bourban, M Flynn, M Guillemot, T Hain, J Kadlec, V Karaiskos, et al. 2005. The ami meeting corpus. In Proceedings of the 5th International Conference on Methods and Techniques in Behavioral Research. volume 88. Yashar Mehdad, Giuseppe Carenini, Frank Tompa, and Raymond T. NG. 2013. Abstractive meeting summarization with entailment and fusion. In Proceedings of the 14th European Workshop on Natural Language Generation. Association for Computational Linguistics, pages 136–146. http://aclweb.org/anthology/W13-2117. Polykarpos Meladianos, Antoine Tixier, Ioannis Nikolentzos, and Michalis Vazirgiannis. 2017. Realtime keyword extraction from conversations. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. Association for Computational Linguistics, pages 462–467. http://aclweb.org/anthology/E17-2074. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. http://aclweb.org/anthology/W04-3252. George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM 38(11):39–41. https://doi.org/10.1145/219717.219748. Gabriel Murray, Giuseppe Carenini, and Raymond Ng. 2012. Using the omega index for evaluating abstractive community detection. In Proceedings of Workshop on Evaluation Metrics and System Comparison for Automatic Summarization. Association for Computational Linguistics, pages 10–18. http://aclweb.org/anthology/W12-2602. 674 Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-T¨ur. 2010. Long story short - global unsupervised models for keyphrase based meeting summarization. Speech Commun. 52(10):801–815. https://doi.org/10.1016/j.specom.2010.06.002. Korbinian Riedhammer, Dan Gillick, Benoit Favre, and Dilek Hakkani-T¨ur. 2008. Packing the meeting summarization knapsack. In Ninth Annual Conference of the International Speech Communication Association. Franc¸ois Rousseau and Michalis Vazirgiannis. 2013. Graph-of-word and tw-idf: New approach to ad hoc ir. In Proceedings of the 22Nd ACM International Conference on Information & Knowledge Management. ACM, New York, NY, USA, CIKM ’13, pages 59–68. https://doi.org/10.1145/2505515.2505671. Stephen B Seidman. 1983. Network structure and minimum degree. Social networks 5(3):269–287. https://doi.org/10.1016/0378-8733(83)90028-X. 
Antoine Tixier, Fragkiskos Malliaros, and Michalis Vazirgiannis. 2016a. A graph degeneracy-based approach to keyword extraction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1860–1870. https://doi.org/10.18653/v1/D16-1191. Antoine Tixier, Polykarpos Meladianos, and Michalis Vazirgiannis. 2017. Combining graph degeneracy and submodularity for unsupervised extractive summarization. In Proceedings of the Workshop on New Frontiers in Summarization. Association for Computational Linguistics, pages 48–58. http://aclweb.org/anthology/W17-4507. Antoine Tixier, Konstantinos Skianis, and Michalis Vazirgiannis. 2016b. Gowvis: A web application for graph-of-words-based text visualization and summarization. In Proceedings of ACL-2016 System Demonstrations. Association for Computational Linguistics, pages 151–156. https://doi.org/10.18653/v1/P16-4026. Rui Wang, Wei Liu, and Chris McDonald. 2014. Corpus-independent generic keyphrase extraction using word embedding vectors. In Software Engineering Research Conference. volume 39.
2018
62
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 675–686 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 675 Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting Yen-Chun Chen and Mohit Bansal UNC Chapel Hill {yenchun, mbansal}@cs.unc.edu Abstract Inspired by how humans summarize long documents, we propose an accurate and fast summarization model that first selects salient sentences and then rewrites them abstractively (i.e., compresses and paraphrases) to generate a concise overall summary. We use a novel sentence-level policy gradient method to bridge the nondifferentiable computation between these two neural networks in a hierarchical way, while maintaining language fluency. Empirically, we achieve the new state-of-theart on all metrics (including human evaluation) on the CNN/Daily Mail dataset, as well as significantly higher abstractiveness scores. Moreover, by first operating at the sentence-level and then the word-level, we enable parallel decoding of our neural generative model that results in substantially faster (10-20x) inference speed as well as 4x faster training convergence than previous long-paragraph encoder-decoder models. We also demonstrate the generalization of our model on the test-only DUC2002 dataset, where we achieve higher scores than a state-of-the-art model. 1 Introduction The task of document summarization has two main paradigms: extractive and abstractive. The former method directly chooses and outputs the salient sentences (or phrases) in the original document (Jing and McKeown, 2000; Knight and Marcu, 2000; Martins and Smith, 2009; BergKirkpatrick et al., 2011). The latter abstractive approach involves rewriting the summary (Banko et al., 2000; Zajic et al., 2004), and has seen substantial recent gains due to neural sequence-tosequence models (Chopra et al., 2016; Nallapati et al., 2016; See et al., 2017; Paulus et al., 2018). Abstractive models can be more concise by performing generation from scratch, but they suffer from slow and inaccurate encoding of very long documents, with the attention model being required to look at all encoded words (in long paragraphs) for decoding each generated summary word (slow, one by one sequentially). Abstractive models also suffer from redundancy (repetitions), especially when generating multi-sentence summary. To address both these issues and combine the advantages of both paradigms, we propose a hybrid extractive-abstractive architecture, with policy-based reinforcement learning (RL) to bridge together the two networks. Similar to how humans summarize long documents, our model first uses an extractor agent to select salient sentences or highlights, and then employs an abstractor network to rewrite (i.e., compress and paraphrase) each of these extracted sentences. To overcome the non-differentiable behavior of our extractor and train on available document-summary pairs without saliency label, we next use actorcritic policy gradient with sentence-level metric rewards to connect these two neural networks and to learn sentence saliency. We also avoid common language fluency issues (Paulus et al., 2018) by preventing the policy gradients from affecting the abstractive summarizer’s word-level training, which is supported by our human evaluation study. 
Our sentence-level reinforcement learning takes into account the word-sentence hierarchy, which better models the language structure and makes parallelization possible. Our extractor combines reinforcement learning and pointer networks, which is inspired by Bello et al. (2017)’s attempt to solve the Traveling Salesman Problem. Our abstractor is a simple encoder-aligner-decoder 676 model (with copying) and is trained on pseudo document-summary sentence pairs obtained via simple automatic matching criteria. Thus, our method incorporates the abstractive paradigm’s advantages of concisely rewriting sentences and generating novel words from the full vocabulary, yet it adopts intermediate extractive behavior to improve the overall model’s quality, speed, and stability. Instead of encoding and attending to every word in the long input document sequentially, our model adopts a human-inspired coarse-to-fine approach that first extracts all the salient sentences and then decodes (rewrites) them (in parallel). This also avoids almost all redundancy issues because the model has already chosen non-redundant salient sentences to abstractively summarize (but adding an optional final reranker component does give additional gains by removing the fewer across-sentence repetitions). Empirically, our approach is the new state-ofthe-art on all ROUGE metrics (Lin, 2004) as well as on METEOR (Denkowski and Lavie, 2014) of the CNN/Daily Mail dataset, achieving statistically significant improvements over previous models that use complex long-encoder, copy, and coverage mechanisms (See et al., 2017). The test-only DUC-2002 improvement also shows our model’s better generalization than this strong abstractive system. In addition, we surpass the popular lead-3 baseline on all ROUGE scores with an abstractive model. Moreover, our sentence-level abstractive rewriting module also produces substantially more (3x) novel N-grams that are not seen in the input document, as compared to the strong flat-structured model of See et al. (2017). This empirically justifies that our RL-guided extractor has learned sentence saliency, rather than benefiting from simply copying longer sentences. We also show that our model maintains the same level of fluency as a conventional RNN-based model because the reward does not leak to our abstractor’s word-level training. Finally, our model’s training is 4x and inference is more than 20x faster than the previous state-of-the-art. The optional final reranker gives further improvements while maintaining a 7x speedup. Overall, our contribution is three fold: First we propose a novel sentence-level RL technique for the well-known task of abstractive summarization, effectively utilizing the word-then-sentence hierarchical structure without annotated matching sentence-pairs between the document and ground truth summary. Next, our model achieves the new state-of-the-art on all metrics of multiple versions of a popular summarization dataset (as well as a test-only dataset) both extractively and abstractively, without loss in language fluency (also demonstrated via human evaluation and abstractiveness scores). Finally, our parallel decoding results in a significant 10-20x speed-up over the previous best neural abstractive summarization system with even better accuracy.1 2 Model In this work, we consider the task of summarizing a given long text document into several (ordered) highlights, which are then combined to form a multi-sentence summary. 
Formally, given a training set of document-summary pairs {x_i, y_i}_{i=1}^N, our goal is to approximate the function h : X → Y, X = {x_i}_{i=1}^N, Y = {y_i}_{i=1}^N such that h(x_i) = y_i, 1 ≤ i ≤ N. Furthermore, we assume there exists an abstracting function g defined as: ∀s ∈ S_i, ∃d ∈ D_i such that g(d) = s, 1 ≤ i ≤ N, where S_i is the set of summary sentences in y_i and D_i the set of document sentences in x_i; i.e., in any given pair of document and summary, every summary sentence can be produced from some document sentence. For simplicity, we omit subscript i in the remainder of the paper. Under this assumption, we can further define another latent function f : X → D^n that satisfies f(x) = {d_t}_{t=1}^n and y = h(x) = [g(d_1), g(d_2), ..., g(d_n)], where [,] denotes sentence concatenation. This latent function f can be seen as an extractor that chooses the salient (ordered) sentences in a given document for the abstracting function g to rewrite. Our overall model consists of these two submodules, the extractor agent and the abstractor network, to approximate the above-mentioned f and g, respectively.

2.1 Extractor Agent

The extractor agent is designed to model f, which can be thought of as extracting salient sentences from the document. We exploit a hierarchical neural model to learn the sentence representations of the document and a ‘selection network’ to extract sentences based on their representations.

1 We are releasing our code, best pretrained models, as well as output summaries, to promote future research: https://github.com/ChenRocks/fast_abs_rl

Figure 1: Our extractor agent: the convolutional encoder computes representation r_j for each sentence. The RNN encoder (blue) computes context-aware representation h_j and then the RNN decoder (green) selects sentence j_t at time step t. With j_t selected, h_{j_t} will be fed into the decoder at time t + 1.

2.1.1 Hierarchical Sentence Representation

We use a temporal convolutional model (Kim, 2014) to compute r_j, the representation of each individual sentence in the documents (details in supplementary). To further incorporate global context of the document and capture the long-range semantic dependency between sentences, a bidirectional LSTM-RNN (Hochreiter and Schmidhuber, 1997; Schuster et al., 1997) is applied on the convolutional output. This enables learning a strong representation, denoted as h_j for the j-th sentence in the document, that takes into account the context of all previous and future sentences in the same document.

2.1.2 Sentence Selection

Next, to select the extracted sentences based on the above sentence representations, we add another LSTM-RNN to train a Pointer Network (Vinyals et al., 2015), to extract sentences recurrently. We calculate the extraction probability by:

u^t_j = \begin{cases} v_p^\top \tanh(W_{p1} h_j + W_{p2} e_t) & \text{if } j_t \neq j_k \; \forall k < t \\ -\infty & \text{otherwise} \end{cases}  (1)

P(j_t | j_1, ..., j_{t-1}) = \mathrm{softmax}(u^t)  (2)

where the e_t’s are the output of the glimpse operation (Vinyals et al., 2016):

a^t_j = v_g^\top \tanh(W_{g1} h_j + W_{g2} z_t)  (3)

\alpha^t = \mathrm{softmax}(a^t)  (4)

e_t = \sum_j \alpha^t_j W_{g1} h_j  (5)

Figure 2: Reinforced training of the extractor (for one extraction step) and its interaction with the abstractor. For simplicity, the critic network is not shown. Note that all d’s and s_t are raw sentences, not vector representations.

In Eqn. 3, z_t is the output of the added LSTM-RNN (shown in green in Fig. 1), which is referred to as the decoder. All the W’s and v’s are trainable parameters. At each time step t, the decoder performs a 2-hop attention mechanism: it first attends to the h_j’s to get a context vector e_t and then attends to the h_j’s again for the extraction probabilities.2 This model is essentially classifying all sentences of the document at each extraction step. An illustration of the whole extractor is shown in Fig. 1.

2 Note that we force-zero the extraction prob. of already extracted sentences so as to prevent the model from using repeating document sentences and suffering from redundancy. This is non-differentiable and hence only done in RL training.

2.2 Abstractor Network

The abstractor network approximates g, which compresses and paraphrases an extracted document sentence to a concise summary sentence. We use the standard encoder-aligner-decoder (Bahdanau et al., 2015; Luong et al., 2015). We add the copy mechanism3 to help directly copy some out-of-vocabulary (OOV) words (See et al., 2017). For more details, please refer to the supplementary.

3 Learning

Given that our extractor performs a non-differentiable hard extraction, we apply standard policy gradient methods to bridge the back-propagation and form an end-to-end trainable (stochastic) computation graph. However, simply starting from a randomly initialized network to train the whole model in an end-to-end fashion is infeasible. When randomly initialized, the extractor would often select sentences that are not relevant, so it would be difficult for the abstractor to learn to abstractively rewrite. On the other hand, without a well-trained abstractor the extractor would get a noisy reward, which leads to a bad estimate of the policy gradient and a sub-optimal policy. We hence propose optimizing each sub-module separately using maximum-likelihood (ML) objectives: train the extractor to select salient sentences (fit f) and the abstractor to generate shortened summaries (fit g). Finally, RL is applied to train the full model end-to-end (fit h).

3.1 Maximum-Likelihood Training for Submodules

Extractor Training: In Sec. 2.1.2, we have formulated our sentence selection as classification. However, most of the summarization datasets are end-to-end document-summary pairs without extraction (saliency) labels for each sentence. Hence, we propose a simple similarity method to provide a ‘proxy’ target label for the extractor. Similar to the extractive model of Nallapati et al.
(2017), for each ground-truth summary sentence, we find the most similar document sentence djt by:4 jt = argmaxi(ROUGE-Lrecall(di, st)) (6) Given these proxy training labels, the extractor is then trained to minimize the cross-entropy loss. 3We use the terminology of copy mechanism (originally named pointer-generator) in order to avoid confusion with the pointer network (Vinyals et al., 2015). 4Nallapati et al. (2017) selected sentences greedily to maximize the global summary-level ROUGE, whereas we match exactly 1 document sentence for each GT summary sentence based on the individual sentence-level score. Abstractor Training: For the abstractor training, we create training pairs by taking each summary sentence and pairing it with its extracted document sentence (based on Eqn. 6). The network is trained as an usual sequence-to-sequence model to minimize the cross-entropy loss L(θabs) = −1 M PM m=1 logPθabs(wm|w1:m−1) of the decoder language model at each generation step, where θabs is the set of trainable parameters of the abstractor and wm the mth generated word. 3.2 Reinforce-Guided Extraction Here we explain how policy gradient techniques are applied to optimize the whole model. To make the extractor an RL agent, we can formulate a Markov Decision Process (MDP)5: at each extraction step t, the agent observes the current state ct = (D, djt−1), samples an action jt ∼ πθa,ω(ct, j) = P(j) from Eqn. 2 to extract a document sentence and receive a reward6 r(t + 1) = ROUGE-LF1(g(djt), st) (7) after the abstractor summarizes the extracted sentence djt. We denote the trainable parameters of the extractor agent by θ = {θa, ω} for the decoder and hierarchical encoder respectively. We can then train the extractor with policy-based RL. We illustrate this process in Fig. 2. The vanilla policy gradient algorithm, REINFORCE (Williams, 1992), is known for high variance. To mitigate this problem, we add a critic network with trainable parameters θc to predict the state-value function V πθa,ω(c). The predicted value of critic bθc,ω(c) is called the ‘baseline’, which is then used to estimate the advantage function: Aπθ(c, j) = Qπθa,ω(c, j) −V πθa,ω(c) because the total return Rt is an estimate of actionvalue function Q(ct, jt). Instead of maximizing Q(ct, jt) as done in REINFORCE, we maximize Aπθ(c, j) with the following policy gradient: ∇θa,ωJ(θa, ω) = E[∇θa,ωlogπθ(c, j)Aπθ(c, j)] (8) And the critic is trained to minimize the square loss: Lc(θc, ω) = (bθc,ω(ct) −Rt)2. This is 5Strictly speaking, this is a Partially Observable Markov Decision Process (POMDP). We approximate it as an MDP by assuming that the RNN hidden state contains all past info. 6In Eqn. 6, we use ROUGE-recall because we want the extracted sentence to contain as much information as possible for rewriting. Nevertheless, for Eqn. 7, ROUGE-F1 is more suitable because the abstractor g is supposed to rewrite the extracted sentence d to be as concise as the ground truth s. 679 known as the Advantage Actor-Critic (A2C), a synchronous variant of A3C (Mnih et al., 2016). For more A2C details, please refer to the supp. Intuitively, our RL training works as follow: If the extractor chooses a good sentence, after the abstractor rewrites it the ROUGE match would be high and thus the action is encouraged. If a bad sentence is chosen, though the abstractor still produces a compressed version of it, the summary would not match the ground truth and the low ROUGE score discourages this action. 
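To make these training signals concrete, the sketch below illustrates Eqs. (6) and (7) with a plain LCS-based ROUGE-L over tokenized sentences. In practice a standard ROUGE package would be used; all function names here are our own illustrative assumptions, not the authors’ code.

```python
def _lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_recall(candidate, reference):
    """ROUGE-L recall: LCS length normalized by reference length."""
    return _lcs_len(candidate, reference) / max(len(reference), 1)

def rouge_l_f1(candidate, reference):
    """ROUGE-L F1 between a candidate and a reference token list."""
    lcs = _lcs_len(candidate, reference)
    p = lcs / max(len(candidate), 1)
    r = lcs / max(len(reference), 1)
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def proxy_labels(doc_sents, summary_sents):
    """Eq. 6: match each ground-truth summary sentence to one document sentence."""
    return [max(range(len(doc_sents)),
                key=lambda i: rouge_l_recall(doc_sents[i], s))
            for s in summary_sents]

def step_reward(rewritten_sent, summary_sent):
    """Eq. 7: sentence-level reward after the abstractor rewrites the extraction."""
    return rouge_l_f1(rewritten_sent, summary_sent)
```

Here `doc_sents` and `summary_sents` are assumed to be lists of token lists; recall is used for the proxy extraction labels and F1 for the per-step RL reward, mirroring the motivation in footnote 6.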
Our RL with a sentence-level agent is a novel attempt in neural summarization. We use RL as a saliency guide without altering the abstractor’s language model, while previous work applied RL on the word-level, which could be prone to gaming the metric at the cost of language fluency.7 Learning how many sentences to extract: In a typical RL setting like game playing, an episode is usually terminated by the environment. On the other hand, in text summarization, the agent does not know in advance how many summary sentence to produce for a given article (since the desired length varies for different downstream applications). We make an important yet simple, intuitive adaptation to solve this: by adding a ‘stop’ action to the policy action space. In the RL training phase, we add another set of trainable parameters vEOE (EOE stands for ‘End-Of-Extraction’) with the same dimension as the sentence representation. The pointer-network decoder treats vEOE as one of the extraction candidates and hence naturally results in a stop action in the stochastic policy. We set the reward for the agent performing EOE to ROUGE-1F1([{g(djt)}t], [{st}t]); whereas for any extraneous, unwanted extraction step, the agent receives zero reward. The model is therefore encouraged to extract when there are still remaining ground-truth summary sentences (to accumulate intermediate reward), and learn to stop by optimizing a global ROUGE and avoiding extra extraction.8 Overall, this modification allows dy7During this RL training of the extractor, we keep the abstractor parameters fixed. Because the input sentences for the abstractor are extracted by an intermediate stochastic policy of the extractor, it is impossible to find the correct target summary for the abstractor to fit g with ML objective. Though it is possible to optimize the abstractor with RL, in out preliminary experiments we found that this does not improve the overall ROUGE, most likely because this RL optimizes at a sentence-level and can add across-sentence redundancy. We achieve SotA results without this abstractor-level RL. 8We use ROUGE-1 for terminal reward because it is a better measure of bag-of-words information (i.e., has all the namic decisions of number-of-sentences based on the input document, eliminates the need for tuning a fixed number of steps, and enables a data-driven adaptation for any specific dataset/application. 3.3 Repetition-Avoiding Reranking Existing abstractive summarization systems on long documents suffer from generating repeating and redundant words and phrases. To mitigate this issue, See et al. (2017) propose the coverage mechanism and Paulus et al. (2018) incorporate tri-gram avoidance during beam-search at testtime. Our model without these already performs well because the summary sentences are generated from mutually exclusive document sentences, which naturally avoids redundancy. However, we do get a small further boost to the summary quality by removing a few ‘across-sentence’ repetitions, via a simple reranking strategy: At sentence-level, we apply the same beam-search tri-gram avoidance (Paulus et al., 2018). We keep all k sentence candidates generated by beam search, where k is the size of the beam. Next, we then rerank all kn combinations of the n generated summary sentence beams. The summaries are reranked by the number of repeated N-grams, the smaller the better. We also apply the diverse decoding algorithm described in Li et al. 
(2016) (which has almost no computation overhead) so as to get the above approach to produce useful diverse reranking lists. We show how much the redundancy affects the summarization task in Sec. 6.2. 4 Related Work Early summarization works mostly focused on extractive and compression based methods (Jing and McKeown, 2000; Knight and Marcu, 2000; Clarke and Lapata, 2010; Berg-Kirkpatrick et al., 2011; Filippova et al., 2015). Recent large-sized corpora attracted neural methods for abstractive summarization (Rush et al., 2015; Chopra et al., 2016). Some of the recent success in neural abstractive models include hierarchical attention (Nallapati et al., 2016), coverage (Suzuki and Nagata, 2016; Chen et al., 2016; See et al., 2017), RL based metric optimization (Paulus et al., 2018), graph-based attention (Tan et al., 2017), and the copy mechanism (Miao and Blunsom, 2016; Gu et al., 2016; See et al., 2017). important information been generated); while ROUGE-L is used as intermediate rewards since it is known for better measurement of language fluency within a local sentence. 680 Our model shares some high-level intuition with extract-then-compress methods. Earlier attempts in this paradigm used Hidden Markov Models and rule-based systems (Jing and McKeown, 2000), statistical models based on parse trees (Knight and Marcu, 2000), and integer linear programming based methods (Martins and Smith, 2009; Gillick and Favre, 2009; Clarke and Lapata, 2010; BergKirkpatrick et al., 2011). Recent approaches investigated discourse structures (Louis et al., 2010; Hirao et al., 2013; Kikuchi et al., 2014; Wang et al., 2015), graph cuts (Qian and Liu, 2013), and parse trees (Li et al., 2014; Bing et al., 2015). For neural models, Cheng and Lapata (2016) used a second neural net to select words from an extractor’s output. Our abstractor does not merely ‘compress’ the sentences but generatively produce novel words. Moreover, our RL bridges the extractor and the abstractor for end-to-end training. Reinforcement learning has been used to optimize the non-differential metrics of language generation and to mitigate exposure bias (Ranzato et al., 2016; Bahdanau et al., 2017). Henß et al. (2015) use Q-learning based RL for extractive summarization. Paulus et al. (2018) use RL policy gradient methods for abstractive summarization, utilizing sequence-level metric rewards with curriculum learning (Ranzato et al., 2016) or weighted ML+RL mixed loss (Paulus et al., 2018) for stability and language fluency. We use sentence-level rewards to optimize the extractor while keeping our ML trained abstractor decoder fixed, so as to achieve the best of both worlds. Training a neural network to use another fixed network has been investigated in machine translation for better decoding (Gu et al., 2017a) and real-time translation (Gu et al., 2017b). They used a fixed pretrained translator and applied policy gradient techniques to train another task-specific network. In question answering (QA), Choi et al. (2017) extract one sentence and then generate the answer from the sentence’s vector representation with RL bridging. Another recent work attempted a new coarse-to-fine attention approach on summarization (Ling and Rush, 2017) and found desired sharp focus properties for scaling to larger inputs (though without metric improvements). Very recently (concurrently), Narayan et al. (2018) use RL for ranking sentences in pure extraction-based summarization and C¸ elikyilmaz et al. 
(2018) investigate multiple communicating encoder agents to enhance the copying abstractive summarizer. Finally, there are some loosely-related recent works: Zhou et al. (2017) proposed selective gate to improve the attention in abstractive summarization. Tan et al. (2018) used an extract-thensynthesis approach on QA, where an extraction model predicts the important spans in the passage and then another synthesis model generates the final answer. Swayamdipta et al. (2017) attempted cascaded non-recurrent small networks on extractive QA, resulting a scalable, parallelizable model. Fan et al. (2017) added controlling parameters to adapt the summary to length, style, and entity preferences. However, none of these used RL to bridge the non-differentiability of neural models. 5 Experimental Setup Please refer to the supplementary for full training details (all hyperparameter tuning was performed on the validation set). We use the CNN/Daily Mail dataset (Hermann et al., 2015) modified for summarization (Nallapati et al., 2016). Because there are two versions of the dataset, original text and entity anonymized, we show results on both versions of the dataset for a fair comparison to prior works. The experiment runs training and evaluation for each version separately. Despite the fact that the 2 versions have been considered separately by the summarization community as 2 different datasets, we use same hyper-parameter values for both dataset versions to show the generalization of our model. We also show improvements on the DUC-2002 dataset in a test-only setup. 5.1 Evaluation Metrics For all the datasets, we evaluate standard ROUGE1, ROUGE-2, and ROUGE-L (Lin, 2004) on fulllength F1 (with stemming) following previous works (Nallapati et al., 2017; See et al., 2017; Paulus et al., 2018). Following See et al. (2017), we also evaluate on METEOR (Denkowski and Lavie, 2014) for a more thorough analysis. 5.2 Modular Extractive vs. Abstractive Our hybrid approach is capable of both extractive and abstractive (i.e., rewriting every sentence) summarization. The extractor alone performs extractive summarization. To investigate the effect of the recurrent extractor (rnn-ext), we implement a feed-forward extractive baseline ff-ext (details in supplementary). It is also possible to apply RL 681 Models ROUGE-1 ROUGE-2 ROUGE-L METEOR Extractive Results lead-3 (See et al., 2017) 40.34 17.70 36.57 22.21 Narayan et al. (2018) 40.0 18.2 36.6 ff-ext 40.63 18.35 36.82 22.91 rnn-ext 40.17 18.11 36.41 22.81 rnn-ext + RL 41.47 18.72 37.76 22.35 Abstractive Results See et al. (2017) (w/o coverage) 36.44 15.66 33.42 16.65 See et al. (2017) 39.53 17.28 36.38 18.72 Fan et al. (2017) (controlled) 39.75 17.29 36.54 ff-ext + abs 39.30 17.02 36.93 20.05 rnn-ext + abs 38.38 16.12 36.04 19.39 rnn-ext + abs + RL 40.04 17.61 37.59 21.00 rnn-ext + abs + RL + rerank 40.88 17.80 38.54 20.38 Table 1: Results on the original, non-anonymized CNN/Daily Mail dataset. Adding RL gives statistically significant improvements for all metrics over non-RL rnn-ext models (and over the state-of-the-art See et al. (2017)) in both extractive and abstractive settings with p < 0.01. Adding the extra reranking stage yields statistically significant better results in terms of all ROUGE metrics with p < 0.01. to extractor without using the abstractor (rnn-ext + RL).9 Benefiting from the high modularity of our model, we can make our summarization system abstractive by simply applying the abstractor on the extracted sentences. 
Our abstractor rewrites each sentence and generates novel words from a large vocabulary, and hence every word in our overall summary is generated from scratch; making our full model categorized into the abstractive paradigm.10 We run experiments on separately trained extractor/abstractor (ff-ext + abs, rnn-ext + abs) and the reinforced full model (rnn-ext + abs + RL) as well as the final reranking version (rnn-ext + abs + RL + rerank). 6 Results For easier comparison, we show separate tables for the original-text vs. anonymized versions – Table 1 and Table 2, respectively. Overall, our model achieves strong improvements and the new state-of-the-art on both extractive and abstractive settings for both versions of the CNN/DM dataset (with some comparable results on the anonymized version). Moreover, Table 3 shows the generalization of our abstractive system to an out-ofdomain test-only setup (DUC-2002), where our model achieves better scores than See et al. (2017). 6.1 Extractive Summarization In the extractive paradigm, we compare our model with the extractive model from Nallapati et al. 9In this case the abstractor function g(d) = d. 10Note that the abstractive CNN/DM dataset does not include any human-annotated extraction label, and hence our models do not receive any direct extractive supervision. Models R-1 R-2 R-L Extractive Results lead-3 (Nallapati et al., 2017) 39.2 15.7 35.5 Nallapati et al. (2017) 39.6 16.2 35.3 ff-ext 39.51 16.85 35.80 rnn-ext 38.97 16.65 35.32 rnn-ext + RL 40.13 16.58 36.47 Abstractive Results Nallapati et al. (2016) 35.46 13.30 32.65 Fan et al. (2017) (controlled) 38.68 15.40 35.47 Paulus et al. (2018) (ML) 38.30 14.81 35.49 Paulus et al. (2018) (RL+ML) 39.87 15.82 36.90 ff-ext + abs 38.73 15.70 36.33 rnn-ext + abs 37.58 14.68 35.24 rnn-ext + abs + RL 38.80 15.66 36.37 rnn-ext + abs + RL + rerank 39.66 15.85 37.34 Table 2: ROUGE for anonymized CNN/DM. (2017) and a strong lead-3 baseline. For producing our summary, we simply concatenate the extracted sentences from the extractors. From Table 1 and Table 2, we can see that our feed-forward extractor out-performs the lead-3 baseline, empirically showing that our hierarchical sentence encoding model is capable of extracting salient sentences.11 The reinforced extractor performs the best, because of the ability to get the summary-level reward and the reduced train-test mismatch of feeding the previous extraction decision. The improvement over lead-3 is consistent across both tables. In Table 2, it outperforms the previous best neural extractive model (Nallapati et al., 2017). In Table 1, our model also outperforms a recent, con11The ff-ext model outperforms rnn-ext possibly because it does not predict sentence ordering; thus is easier to optimize and the n-gram based metrics do not consider sentence ordering. Also note that in our MDP formulation, we cannot apply RL on ff-ext due to its historyless nature. Even if applied naively, there is no mean for the feed-forward model to learn the EOE described in Sec. 3.2. 682 Models R-1 R-2 R-L See et al. (2017) 37.22 15.78 33.90 rnn-ext + abs + RL 39.46 17.34 36.72 Table 3: Generalization to DUC-2002 (F1). current work by Narayan et al. (2018), showing that our pointer-network extractor and reward formulations are very effective when combined with A2C RL. 6.2 Abstractive Summarization After applying the abstractor, the ff-ext based model still out-performs the rnn-ext model. 
Both combined models exceed the pointer-generator model (See et al., 2017) without coverage by a large margin for all metrics, showing the effectiveness of our 2-step hierarchical approach: our method naturally avoids repetition by extracting multiple sentences with different keypoints.12 Moreover, after applying reinforcement learning, our model performs better than the best model of See et al. (2017) and the best ML trained model of Paulus et al. (2018). Our reinforced model outperforms the ML trained rnn-ext + abs baseline with statistical significance of p < 0.01 on all metrics for both version of the dataset, indicating the effectiveness of the RL training. Also, rnn-ext + abs + RL is statistically significant better than See et al. (2017) for all metrics with p < 0.01.13 In the supplementary, we show the learning curve of our RL training, where the average reward goes up quickly after the extractor learns the End-ofExtract action and then stabilizes. For all the above models, we use standard greedy decoding and find that it performs well. Reranking and Redundancy Although the extract-then-abstract approach inherently will not generate repeating sentences like other neuraldecoders do, there might still be across-sentence redundancy because the abstractor is not aware of other extracted sentences when decoding one. Hence, we incorporate an optional reranking strategy described in Sec. 3.3. The improved ROUGE scores indicate that this successfully removes some remaining redundancies and hence produces more concise summaries. Our best abstractive 12A trivial lead-3 + abs baseline obtains ROUGE of (37.37, 15.59, 34.82), which again confirms the importance of our reinforce-based sentence selection. 13We calculate statistical significance based on the bootstrap test (Noreen, 1989; Efron and Tibshirani, 1994) with 100K samples. Output of Paulus et al. (2018) is not available so we couldn’t test for statistical significance there. Relevance Readability Total See et al. (2017) 120 128 248 rnn-ext + abs + RL + rerank 137 133 270 Equally good/bad 43 39 82 Table 4: Human Evaluation: pairwise comparison between our final model and See et al. (2017). model (rnn-ext + abs + RL + rerank) is clearly superior than the one of See et al. (2017). We are comparable on R-1 and R-2 but a 0.4 point improvement on R-L w.r.t. Paulus et al. (2018).14 We also outperform the results of Fan et al. (2017) on both original and anonymized dataset versions. Several previous works have pointed out that extractive baselines are very difficult to beat (in terms of ROUGE) by an abstractive system (See et al., 2017; Nallapati et al., 2017). Note that our best model is one of the first abstractive models to outperform the lead-3 baseline on the originaltext CNN/DM dataset. Our extractive experiment serves as a complementary analysis of the effect of RL with extractive systems. 6.3 Human Evaluation We also conduct human evaluation to ensure robustness of our training procedure. We measure relevance and readability of the summaries. Relevance is based on the summary containing important, salient information from the input article, being correct by avoiding contradictory/unrelated information, and avoiding repeated/redundant information. Readability is based on the summarys fluency, grammaticality, and coherence. 
To evaluate both these criteria, we design the following Amazon MTurk experiment: we randomly select 100 samples from the CNN/DM test set and ask the human testers (3 for each sample) to rank between summaries (for relevance and readability) produced by our model and that of See et al. (2017) (the models were anonymized and randomly shuffled), i.e. A is better, B is better, both are equally good/bad. Following previous work, the input article and ground truth summaries are also shown to the human participants in addition to the two model summaries.15 From the results shown in Table 4, we can see that our model is better in both relevance and readability w.r.t. See et al. (2017). 14We do not list the scores of their pure RL model because they discussed its bad readability. 15We selected human annotators that were located in the US, had an approval rate greater than 95%, and had at least 10,000 approved HITs on record. 683 Speed Models total time (hr) words / sec (See et al., 2017) 12.9 14.8 rnn-ext + abs + RL 0.68 361.3 rnn-ext + abs + RL + rerank 2.00 (1.46 +0.54) 109.8 Table 5: Speed comparison with See et al. (2017). 6.4 Speed Comparison Our two-stage extractive-abstractive hybrid model is not only the SotA on summary quality metrics, but more importantly also gives a significant speed-up in both train and test time over a strong neural abstractive system (See et al., 2017).16 Our full model is composed of a extremely fast extractor and a parallelizable abstractor, where the computation bottleneck is on the abstractor, which has to generate summaries with a large vocabulary from scratch.17 The main advantage of our abstractor at decoding time is that we can first compute all the extracted sentences for the document, and then abstract every sentence concurrently (in parallel) to generate the overall summary. In Table 5, we show the substantial test-time speed-up of our model compared to See et al. (2017).18 We calculate the total decoding time for producing all summaries for the test set.19 Due to the fact that the main test-time speed bottleneck of RNN language generation model is that the model is constrained to generate one word at a time, the total decoding time is dependent on the number of total words generated; we hence also report the decoded words per second for a fair comparison. Our model without reranking is extremely fast. From Table 5 we can see that we achieve a speed up of 18x in time and 24x in word generation rate. Even after adding the (optional) reranker, we still maintain a 6-7x speed-up (and hence a user can choose to use the reranking component depending on their downstream application’s speed requirements).20 16The only publicly available code with a pretrained model for neural summarization which we can test the speed. 17The time needed for extractor is negligible w.r.t. the abstractor because it does not require large matrix multiplication for generating every word. Moreover, with convolutional encoder at word-level made parallelizable by the hierarchical rnn-ext, our model is scalable for very long documents. 18For details of training speed-up, please see the supp. 19We time the model of See et al. (2017) using beam size of 4 (used for their best-reported scores). Without beam-search, it gets significantly worse ROUGE of (36.62, 15.12, 34.08), so we do not compare speed-ups w.r.t. that version. 20Most of the recent neural abstractive summarization systems are of similar algorithmic complexity to that of See et al. (2017). 
The main differences such as the training objective (ML vs. RL) and copying (soft/hard) has negligible test runtime compared to the slowest component: the long-summary Novel N-gram (%) Models 1-gm 2-gm 3-gm 4-gm See et al. (2017) 0.1 2.2 6.0 9.7 rnn-ext + abs + RL + rerank 0.3 10.0 21.7 31.6 reference summaries 10.8 47.5 68.2 78.2 Table 6: Abstractiveness: novel n-gram counts. 7 Analysis 7.1 Abstractiveness We compute an abstractiveness score (See et al., 2017) as the ratio of novel n-grams in the generated summary that are not present in the input document. The results are shown in Table 6: our model rewrites substantially more abstractive summaries than previous work. A potential reason for this is that when trained with individual sentence-pairs, the abstractor learns to drop more document words so as to write individual summary sentences as concise as human-written ones; thus the improvement in multi-gram novelty. 7.2 Qualitative Analysis on Output Examples We show examples of how our best model selects sentences and then rewrites them. In the supplementary Figure 2 and Figure 3, we can see how the abstractor rewrites the extracted sentences concisely while keeping the mentioned facts. Adding the reranker makes the output more compact globally. We observe that when rewriting longer text, the abstractor would have many facts to choose from (Figure 3 sentence 2) and this is where the reranker helps avoid redundancy across sentences. 8 Conclusion We propose a novel sentence-level RL model for abstractive summarization, which makes the model aware of the word-sentence hierarchy. Our model achieves the new state-of-the-art on both CNN/DM versions as well a better generalization on test-only DUC-2002, along with a significant speed-up in training and decoding. Acknowledgments We thank the anonymous reviewers for their helpful comments. This work was supported by a Google Faculty Research Award, a Bloomberg Data Science Research Grant, an IBM Faculty Award, and NVidia GPU awards. attentional-decoder’s sequential generation; and this is the component that we substantially speed up via our parallel sentence decoding with sentence-selection RL. 684 References Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. In ICLR. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Michele Banko, Vibhu O. Mittal, and Michael J. Witbrock. 2000. Headline generation based on statistical translation. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, ACL ’00, pages 318–325, Stroudsburg, PA, USA. Association for Computational Linguistics. Irwan Bello, Hieu Pham, Quoc V. Le, Mohammad Norouzi, and Samy Bengio. 2017. Neural combinatorial optimization with reinforcement learning. arXiv preprint 1611.09940. Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 481–490, Stroudsburg, PA, USA. Association for Computational Linguistics. Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, and Rebecca J. Passonneau. 2015. Abstractive multi-document summarization via phrase selection and merging. In ACL. Asli C¸ elikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. 
Deep communicating agents for abstractive summarization. NAACL-HLT. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for modeling documents. In IJCAI. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 484–494, Berlin, Germany. Association for Computational Linguistics. Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Berant. 2017. Coarse-to-fine question answering for long documents. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 209–220. Association for Computational Linguistics. Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93–98, San Diego, California. Association for Computational Linguistics. James Clarke and Mirella Lapata. 2010. Discourse constraints for document compression. Computational Linguistics, 36(3):411–441. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation. Bradley Efron and Robert J Tibshirani. 1994. An introduction to the bootstrap. CRC press. Angela Fan, David Grangier, and Michael Auli. 2017. Controllable abstractive summarization. arXiv preprint, abs/1711.05217. Katja Filippova, Enrique Alfonseca, Carlos Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with lstms. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP’15). Dan Gillick and Benoit Favre. 2009. A scalable global model for summarization. In Proceedings of the Workshop on Integer Linear Programming for Natural Langauge Processing, ILP ’09, pages 10–18, Stroudsburg, PA, USA. Association for Computational Linguistics. Jiatao Gu, Kyunghyun Cho, and Victor O. K. Li. 2017a. Trainable greedy decoding for neural machine translation. In EMNLP. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–1640, Berlin, Germany. Association for Computational Linguistics. Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O. K. Li. 2017b. Learning to translate in realtime with neural machine translation. In EACL. Sebastian Henß, Margot Mieskes, and Iryna Gurevych. 2015. A reinforcement learning approach for adaptive single- and multi-document summarization. In International Conference of the German Society for Computational Linguistics and Language Technology (GSCL-2015), pages 3–12. Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS). Tsutomu Hirao, Yasuhisa Yoshida, Masaaki Nishino, Norihito Yasuda, and Masaaki Nagata. 2013. Single-document summarization as a tree knapsack problem. 
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1515–1520, Seattle, Washington, USA. Association for Computational Linguistics. 685 Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(9):1735– 1780. Hongyan Jing and Kathleen R. McKeown. 2000. Cut and paste based text summarization. In Proceedings of the 1st North American Chapter of the Association for Computational Linguistics Conference, NAACL 2000, pages 178–185, Stroudsburg, PA, USA. Association for Computational Linguistics. Yuta Kikuchi, Tsutomu Hirao, Hiroya Takamura, Manabu Okumura, and Masaaki Nagata. 2014. Single document summarization based on nested tree structure. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 315–320, Baltimore, Maryland. Association for Computational Linguistics. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Association for Computational Linguistics. Kevin Knight and Daniel Marcu. 2000. Statisticsbased summarization - step one: Sentence compression. In Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, pages 703–710. AAAI Press. Chen Li, Yang Liu, Fei Liu, Lin Zhao, and Fuliang Weng. 2014. Improving multi-documents summarization by sentence compression based on expanded constituent parse trees. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 691–701. Association for Computational Linguistics. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. A simple, fast diverse decoding algorithm for neural generation. arXiv preprint, abs/1611.08562. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Jeffrey Ling and Alexander Rush. 2017. Coarse-to-fine attention models for document summarization. In Proceedings of the Workshop on New Frontiers in Summarization, pages 33–42. Association for Computational Linguistics. Annie Louis, Aravind Joshi, and Ani Nenkova. 2010. Discourse indicators for content selection in summarization. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL ’10, pages 147–156, Stroudsburg, PA, USA. Association for Computational Linguistics. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics. Andr´e F. T. Martins and Noah A. Smith. 2009. Summarization with a joint model for sentence extraction and compression. In Proceedings of the Workshop on Integer Linear Programming for Natural Langauge Processing, ILP ’09, pages 1–9, Stroudsburg, PA, USA. Association for Computational Linguistics. Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. In EMNLP. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. 
Asynchronous methods for deep reinforcement learning. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1928– 1937, New York, New York, USA. PMLR. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In AAAI Conference on Artificial Intelligence. Ramesh Nallapati, Bowen Zhou, Cicero Nogueira dos santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-tosequence rnns and beyond. In CoNLL. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Ranking sentences for extractive summarization with reinforcement learning. NAACL-HLT. Eric W Noreen. 1989. Computer-intensive methods for testing hypotheses. Wiley New York. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In ICLR. Xian Qian and Yang Liu. 2013. Fast joint compression and summarization via graph cuts. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1492–1502, Seattle, Washington, USA. Association for Computational Linguistics. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In ICLR. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389, Lisbon, Portugal. Association for Computational Linguistics. 686 Mike Schuster, Kuldip K. Paliwal, and A. General. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083. Association for Computational Linguistics. Jun Suzuki and Masaaki Nagata. 2016. Rnn-based encoder-decoder approach with word frequency estimation. In EACL. Swabha Swayamdipta, Ankur P. Parikh, and Tom Kwiatkowski. 2017. Multi-mention learning for reading comprehension with neural cascades. arXiv preprint, abs/1711.00894. Chuanqi Tan, Furu Wei, Nan Yang, Weifeng Lv, and Ming Zhou. 2018. S-net: From answer extraction to answer generation for machine reading comprehension. In AAAI. Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graphbased attentional neural model. In ACL. Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. 2016. Order matters: Sequence to sequence for sets. In International Conference on Learning Representations (ICLR). Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2692–2700. Curran Associates, Inc. Xun Wang, Yasuhisa Yoshida, Tsutomu Hirao, Katsuhito Sudoh, and Masaaki Nagata. 2015. Summarization based on task-oriented discourse parsing. IEEE/ACM Trans. Audio, Speech and Lang. Proc., 23(8):1358–1367. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Mach. Learn., 8(3-4):229–256. David Zajic, Bonnie Dorr, and Richard Schwartz. 2004. Bbn/umd at duc-2004: Topiary. 
In HLT-NAACL 2004 Document Understanding Workshop, pages 112–119, Boston, Massachusetts. Qingyu Zhou, Nan Yang, Furu Wei, and Ming Zhou. 2017. Selective encoding for abstractive sentence summarization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1095– 1104. Association for Computational Linguistics.
Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation
Han Guo∗ Ramakanth Pasunuru∗ Mohit Bansal
UNC Chapel Hill
{hanguo, ram, mbansal}@cs.unc.edu
∗Equal contribution.

Abstract

An accurate abstractive summary of a document should contain all its salient information and should be logically entailed by the input document. We improve these important aspects of abstractive summarization via multi-task learning with the auxiliary tasks of question generation and entailment generation, where the former teaches the summarization model how to look for salient questioning-worthy details, and the latter teaches the model how to rewrite a summary which is a directed-logical subset of the input document. We also propose novel multi-task architectures with high-level (semantic) layer-specific sharing across multiple encoder and decoder layers of the three tasks, as well as soft-sharing mechanisms (and show performance ablations and analysis examples of each contribution). Overall, we achieve statistically significant improvements over the state-of-the-art on both the CNN/DailyMail and Gigaword datasets, as well as on the DUC-2002 transfer setup. We also present several quantitative and qualitative analysis studies of our model’s learned saliency and entailment skills.

1 Introduction

Abstractive summarization is the challenging NLG task of compressing and rewriting a document into a short, relevant, salient, and coherent summary. It has numerous applications such as summarizing storylines, event understanding, etc. As compared to extractive or compressive summarization (Jing and McKeown, 2000; Knight and Marcu, 2002; Clarke and Lapata, 2008; Filippova et al., 2015; Henß et al., 2015), abstractive summaries are based on rewriting as opposed to selecting. Recent end-to-end, neural sequence-to-sequence models and larger datasets have allowed substantial progress on the abstractive task, with ideas ranging from copy-pointer mechanism and redundancy coverage, to metric reward based reinforcement learning (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016; See et al., 2017). Despite these strong recent advancements, there is still a lot of scope for improving the summary quality generated by these models. A good rewritten summary is one that contains all the salient information from the document, is logically followed (entailed) by it, and avoids redundant information. The redundancy aspect was addressed by coverage models (Suzuki and Nagata, 2016; Chen et al., 2016; Nallapati et al., 2016; See et al., 2017), but we still need to teach these models about how to better detect salient information from the input document, as well as about better logically-directed natural language inference skills. In this work, we improve abstractive text summarization via soft, high-level (semantic) layer-specific multi-task learning with two relevant auxiliary tasks. The first is that of document-to-question generation, which teaches the summarization model about what are the right questions to ask, which in turn is directly related to what the salient information in the input document is.
The second auxiliary task is a premise-to-entailment generation task to teach it how to rewrite a summary which is a directed-logical subset of (i.e., logically follows from) the input document, and contains no contradictory or unrelated information. For the question generation task, we use the SQuAD dataset (Rajpurkar et al., 2016), where we learn to generate a question given a sentence containing the answer, similar to the recent work 688 by Du et al. (2017). Our entailment generation task is based on the recent SNLI classification dataset and task (Bowman et al., 2015), converted to a generation task (Pasunuru and Bansal, 2017). Further, we also present novel multi-task learning architectures based on multi-layered encoder and decoder models, where we empirically show that it is substantially better to share the higherlevel semantic layers between the three aforementioned tasks, while keeping the lower-level (lexico-syntactic) layers unshared. We also explore different ways to optimize the shared parameters and show that ‘soft’ parameter sharing achieves higher performance than hard sharing. Empirically, our soft, layer-specific sharing model with the question and entailment generation auxiliary tasks achieves statistically significant improvements over the state-of-the-art on both the CNN/DailyMail and Gigaword datasets. It also performs significantly better on the DUC2002 transfer setup, demonstrating its strong generalizability as well as the importance of auxiliary knowledge in low-resource scenarios. We also report improvements on our auxiliary question and entailment generation tasks over their respective previous state-of-the-art. Moreover, we significantly decrease the training time of the multitask models by initializing the individual tasks from their pretrained baseline models. Finally, we present human evaluation studies as well as detailed quantitative and qualitative analysis studies of the improved saliency detection and logical inference skills learned by our multi-task model. 2 Related Work Automatic text summarization has been progressively improving over the time, initially more focused on extractive and compressive models (Jing and McKeown, 2000; Knight and Marcu, 2002; Clarke and Lapata, 2008; Filippova et al., 2015; Kedzie et al., 2015), and moving more towards compressive and abstractive summarization based on graphs and concept maps (Giannakopoulos, 2009; Ganesan et al., 2010; Falke and Gurevych, 2017) and discourse trees (Gerani et al., 2014), syntactic parse trees (Cheung and Penn, 2014; Wang et al., 2013), and Abstract Meaning Representations (AMR) (Liu et al., 2015; Dohare and Karnick, 2017). Recent work has also adopted machine translation inspired neural seq2seq models for abstractive summarization with advances in hierarchical, distractive, saliency, and graphattention modeling (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016; Chen et al., 2016; Tan et al., 2017). Paulus et al. (2018) and Henß et al. (2015) incorporated recent advances from reinforcement learning. Also, See et al. (2017) further improved results via pointercopy mechanism and addressed the redundancy with coverage mechanism. Multi-task learning (MTL) is a useful paradigm to improve the generalization performance of a task with related tasks while sharing some common parameters/representations (Caruana, 1998; Argyriou et al., 2007; Kumar and Daum´e III, 2012). 
Several recent works have adopted MTL in neural models (Luong et al., 2016; Misra et al., 2016; Hashimoto et al., 2017; Pasunuru and Bansal, 2017; Ruder et al., 2017; Kaiser et al., 2017). Moreover, some of the above works have investigated the use of shared vs unshared sets of parameters. On the other hand, we investigate the importance of soft parameter sharing and highlevel versus low-level layer-specific sharing. Our previous workshop paper (Pasunuru et al., 2017) presented some preliminary results for multi-task learning of textual summarization with entailment generation. This current paper has several major differences: (1) We present question generation as an additional effective auxiliary task to enhance the important complementary aspect of saliency detection; (2) Our new high-level layer-specific sharing approach is significantly better than alternative layer-sharing approaches (including the decoder-only sharing by Pasunuru et al. (2017)); (3) Our new soft sharing parameter approach gives stat. significant improvements over hard sharing; (4) We propose a useful idea of starting multi-task models from their pretrained baselines, which significantly speeds up our experiment cycle1; (5) For evaluation, we show diverse improvements of our soft, layer-specific MTL model (over state-of-the-art pointer+coverage baselines) on the CNN/DailyMail, Gigaword, as well as DUC datasets; we also report human evaluation plus analysis examples of learned saliency and entailment skills; we also report improvements on the auxiliary question and entailment generation tasks over their respective previous state-of-the-art. 1About 4-5 days for Pasunuru et al. (2017) approach vs. only 10 hours for us. This will allow the community to try many more multi-task training and tuning ideas faster. 689 In our work, we use a question generation task to improve the saliency of abstractive summarization in a multi-task setting. Using the SQuAD dataset (Rajpurkar et al., 2016), we learn to generate a question given the sentence containing the answer span in the comprehension (similar to Du et al. (2017)). For the second auxiliary task of entailment generation, we use the generation version of the RTE classification task (Dagan et al., 2006; Lai and Hockenmaier, 2014; Jimenez et al., 2014; Bowman et al., 2015). Some previous work has explored the use of RTE for redundancy detection in summarization by modeling graph-based relationships between sentences to select the most non-redundant sentences (Mehdad et al., 2013; Gupta et al., 2014), whereas our approach is based on multi-task learning. 3 Models First, we introduce our pointer+coverage baseline model and then our two auxiliary tasks: question generation and entailment generation (and finally the multi-task learning models in Sec. 4). 3.1 Baseline Pointer+Coverage Model We use a sequence-attention-sequence model with a 2-layer bidirectional LSTM-RNN encoder and a 2-layer uni-directional LSTM-RNN decoder, along with Bahdanau et al. (2015) style attention. Let x = {x1, x2, ..., xm} be the source document and y = {y1, y2, ..., yn} be the target summary. The output summary generation vocabulary distribution conditioned over the input source document is Pv(y|x; θ) = Qn t=1 pv(yt|y1:t−1, x; θ). Let the decoder hidden state be st at time step t and let ct be the context vector which is defined as a weighted combination of encoder hidden states. 
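Before continuing with the output projection, here is a small sketch of how the context vector c_t can be computed as a weighted combination of encoder hidden states with additive (Bahdanau-style) scoring. The weight shapes and function names are our own assumptions, not the paper’s implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(decoder_state, encoder_states, W_dec, W_enc, v):
    """Additive attention sketch (assumed shapes).

    decoder_state:  (d_dec,)    current decoder hidden state s_t
    encoder_states: (m, d_enc)  encoder hidden states over the source document
    W_dec: (d_att, d_dec), W_enc: (d_att, d_enc), v: (d_att,)
    Returns the attention distribution and the context vector c_t.
    """
    scores = np.array([v @ np.tanh(W_dec @ decoder_state + W_enc @ h)
                       for h in encoder_states])
    alpha = softmax(scores)            # attention weights over source positions
    context = alpha @ encoder_states   # c_t = sum_i alpha_i * h_i
    return alpha, context
```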
We concatenate the decoder’s (last) RNN layer hidden state st and context vector ct and apply a linear transformation, and then project to the vocabulary space by another linear transformation. Finally, the conditional vocabulary distribution at each time step t of the decoder is defined as: pv(yt|y1:t−1, x; θ) = sfm(Vp(Wf[st; ct]+bf)+bp) (1) where, Wf, Vp, bf, bp are trainable parameters, and sfm(·) is the softmax function. Pointer-Generator Networks Pointer mechanism (Vinyals et al., 2015) helps in directly copying the words from the source sequence during target sequence generation, which is a good fit for a task like summarization. Our pointer mechanism approach is similar to See et al. (2017), who use a soft switch based on the generation probability pg = σ(Wgct+Ugst+Vgewt−1+bg), where σ(·) is a sigmoid function, Wg, Ug, Vg and bg are parameters learned during training. ewt−1 is the previous time step output word embedding. The final word distribution is Pf(y) = pg·Pv(y)+(1−pg)·Pc(y), where Pv vocabulary distribution is as shown in Eq. 1, and copy distribution Pc is based on the attention distribution over source document words. Coverage Mechanism Following previous work (See et al., 2017), coverage helps alleviate the issue of word repetition while generating long summaries. We maintain a coverage vector ˆct = Pt−1 t=0 αt that sums over all of the previous time steps attention distributions αt, and this is added as input to the attention mechanism. Coverage loss is Lcov(θ) = P t P i min(αt,i, ˆct,i). Finally, the total loss is a weighted combination of cross-entropy loss and coverage loss: L(θ) = −log Pf(y) + λLcov(θ) (2) where λ is a tunable hyperparameter. 3.2 Two Auxiliary Tasks Despite the strengths of the strong model described above with attention, pointer, and coverage, a good summary should also contain maximal salient information and be a directed logical entailment of the source document. We teach these skills to the abstractive summarization model via multi-task training with two related auxiliary tasks: question generation task and entailment generation. Question Generation The task of question generation is to generate a question from a given input sentence, which in turn is related to the skill of being able to find the important salient information to ask questions about. First the model has to identify the important information present in the given sentence, then it has to frame (generate) a question based on this salient information, such that, given the sentence and the question, one has to be able to predict the correct answer (salient information in this case). A good summary should also be able to find and extract all the salient information in the given source document, and hence we incorporate such capabilities into our abstractive text summarization model by multi-task 690 learning it with a question generation task, sharing some common parameters/representations (see more details in Sec. 4). For setting up the question generation task, we follow Du et al. (2017) and use the SQuAD dataset to extract sentencequestion pairs. Next, we use the same sequenceto-sequence model architecture as our summarization model. Note that even though our question generation task is generating one question at a time2, our multi-task framework (see Sec. 
4) is set up in such a way that the sentence-level knowledge from this auxiliary task can help the documentlevel primary (summarization) task to generate multiple salient facts – by sharing high-level semantic layer representations. See Sec. 7 and Table 10 for a quantitative evaluation showing that the multi-task model can find multiple (and more) salient phrases in the source document. Also see Sec. 7 (and supp) for challenging qualitative examples where baseline and SotA models only recover a small subset of salient information but our multi-task model with question generation is able to detect more of the important information. Entailment Generation The task of entailment generation is to generate a hypothesis which is entailed by (or logically follows from) the given premise as input. In summarization, the generation decoder also needs to generate a summary that is entailed by the source document, i.e., does not contain any contradictory or unrelated/extraneous information as compared to the input document. We again incorporate such inference capabilities into the summarization model via multi-task learning, sharing some common representations/parameters between our summarization and entailment generation model (more details in Sec. 4). For this task, we use the entailmentlabeled pairs from the SNLI dataset (Bowman et al., 2015) and set it up as a generation task (using the same strong model architecture as our abstractive summarization model). See Sec. 7 and Table 9 for a quantitative evaluation showing that the multi-task model is better entailed by the source document and has fewer extraneous facts. Also see Sec. 7 and supplementary for qualitative examples of how our multi-task model with the entailment auxiliary task is able to generate more logically-entailed summaries than the baseline and 2We also tried to generate all the questions at once from the full document, but we obtained low accuracy because of this task’s challenging nature and overall less training data. QG ENCODER SG ENCODER EG ENCODER QG DECODER SG DECODER EG DECODER ATTENTION DISTRIBUTION UNSHARED ENCODER LAYER 1 SHARED ENCODER LAYER 2 SHARED DECODER LAYER 1 UNSHARED DECODER LAYER 2 SHARED ATTENTION Figure 1: Overview of our multi-task model with parallel training of three tasks: abstractive summary generation (SG), question generation (QG), and entailment generation (EG). We share the ‘blue’ color representations across all the three tasks, i.e., second layer of encoder, attention parameters, and first layer of decoder. SotA models, which instead produce extraneous, unrelated words not present (in any paraphrased form) in the source document. 4 Multi-Task Learning We employ multi-task learning for parallel training of our three tasks: abstractive summarization, question generation, and entailment generation. In this section, we describe our novel layerspecific, soft-sharing approaches and other multitask learning details. 4.1 Layer-Specific Sharing Mechanism Simply sharing all parameters across the related tasks is not optimal, because models for different tasks have different input and output distributions, esp. for low-level vs. high-level parameters. Therefore, related tasks should share some common representations (e.g., high-level information), as well as need their own individual task-specific representations (esp. low-level information). To this end, we allow different components of model parameters of related tasks to be shared vs. unshared, as described next. Encoder Layer Sharing: Belinkov et al. 
(2017) observed that lower layers (i.e., the layers closer to the input words) of RNN cells in a seq2seq 691 machine translation model learn to represent word structure, while higher layers (farther from input) are more focused on high-level semantic meanings (similar to findings in the computer vision community for image features (Zeiler and Fergus, 2014)). We believe that while textual summarization, question generation, and entailment generation have different training data distributions and low-level representations, they can still benefit from sharing their models’ high-level components (e.g., those that capture the skills of saliency and inference). Thus, we keep the lower-level layer (i.e., first layer closer to input words) of the 2layer encoder of all three tasks unshared, while we share the higher layer (second layer in our model as shown in Fig. 1) across the three tasks. Decoder Layer Sharing: Similarly for the decoder, lower layers (i.e., the layers closer to the output words) learn to represent word structure for generation, while higher layers (farther from output) are more focused on high-level semantic meaning. Hence, we again share the higher level components (first layer in the decoder far from output as show in Fig. 1), while keeping the lower layer (i.e., second layer) of decoders of all three tasks unshared. Attention Sharing: As described in Sec. 3.1, the attention mechanism defines an attention distribution over high-level layer encoder hidden states and since we share the second, high-level (semantic) layer of all the encoders, it is intuitive to share the attention parameters as well. 4.2 Soft vs. Hard Parameter Sharing Hard-sharing: In the most common multi-task learning hard-sharing approach, the parameters to be shared are forced to be the same. As a result, gradient information from multiple tasks will directly pass through shared parameters, hence forcing a common space representation for all the related tasks. Soft-sharing: In our soft-sharing approach, we encourage shared parameters to be close in representation space by penalizing their l2 distances. Unlike hard sharing, this approach gives more flexibility for the tasks by only loosely coupling the shared space representations. We minimize the following loss function for the primary task in soft-sharing approach: L(θ) = −log Pf(y)+λLcov(θ)+γ∥θs−ψs∥(3) where γ is a hyperparameter, θ represents the primary summarization task’s full parameters, while θs and ψs represent the shared parameter subset between the primary and auxiliary tasks. 4.3 Fast Multi-Task Training During multi-task learning, we alternate the minibatch optimization of the three tasks, based on a tunable ‘mixing ratio’ αs : αq : αe; i.e., optimizing the summarization task for αs mini-batches followed by optimizing the question generation task for αq mini-batches, followed by entailment generation task for αe mini-batches (and for 2way versions of this, we only add one auxiliary task at a time). We continue this process until all the models converge. Also, importantly, instead of training from scratch, we start the primary task (summarization) from a 90%-converged model of its baseline to make the training process faster. We observe that starting from a fully-converged baseline makes the model stuck in a local minimum. 
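To make the soft-sharing objective of Eq. 3 and this alternating mini-batch schedule concrete, here is a minimal sketch. The task interface (batches(), shared_params(), step()), the mixing ratio, and the value of γ are illustrative assumptions, not the actual implementation or tuned values; the tasks are assumed to already be initialized from pretrained baselines as in Sec. 4.3.

```python
import itertools

def soft_sharing_penalty(shared_a, shared_b, gamma):
    """gamma * ||theta_s - psi_s||_2 over the shared parameter subsets (cf. Eq. 3)."""
    squared = sum(float(((p - q) ** 2).sum()) for p, q in zip(shared_a, shared_b))
    return gamma * squared ** 0.5

def multitask_train(sg, qg, eg, mix=(4, 1, 1), gamma=5e-5, rounds=10000):
    """Alternate optimization: alpha_s SG batches, then alpha_q QG, then alpha_e EG batches.

    Each task object is assumed (hypothetically) to expose batches(),
    shared_params(), and step(batch, extra_loss) performing one optimizer update.
    """
    tasks = (sg, qg, eg)
    streams = [itertools.cycle(t.batches()) for t in tasks]
    for _ in range(rounds):
        for i, (task, n_batches) in enumerate(zip(tasks, mix)):
            for _ in range(n_batches):
                # Soft sharing: pull this task's shared subset toward the other tasks' shared subsets.
                penalty = sum(soft_sharing_penalty(task.shared_params(),
                                                   other.shared_params(), gamma)
                              for other in tasks if other is not task)
                task.step(next(streams[i]), extra_loss=penalty)
```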
In addition, we also start all auxiliary models from their 90%-converged baselines, as we found that starting the auxiliary models from scratch has a chance to pull the primary model’s shared parameters towards randomly-initialized auxiliary model’s shared parameters. 5 Experimental Setup Datasets: We use CNN/DailyMail dataset (Hermann et al., 2015; Nallapati et al., 2016) and Gigaword (Rush et al., 2015) datasets for summarization, and the Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015) and the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) datasets for our entailment and question generation tasks, resp. We also show generalizability/transfer results on DUC-2002 with our CNN/DM trained models. Supplementary contains dataset details. Evaluation Metrics: We use the standard ROUGE evaluation package (Lin, 2004) for reporting the results on all of our summarization models. Following previous work (Chopra et al., 2016; Nallapati et al., 2016), we use ROUGE full-length F1 variant for all our results. Following See et al. (2017), we also report METEOR (Denkowski and Lavie, 2014) using the MS-COCO evaluation script (Chen et al., 2015). Human Evaluation Criteria: We used Amazon MTurk to perform human evaluation of summary relevance and readability. We selected human annotators that were located in the US, had an ap692 Models ROUGE-1 ROUGE-2 ROUGE-L METEOR PREVIOUS WORK Seq2Seq(50k vocab) (See et al., 2017) 31.33 11.81 28.83 12.03 Pointer (See et al., 2017) 36.44 15.66 33.42 15.35 Pointer+Coverage (See et al., 2017) ⋆ 39.53 17.28 36.38 18.72 Pointer+Coverage (See et al., 2017) † 38.82 16.81 35.71 18.14 OUR MODELS Two-Layer Baseline (Pointer+Coverage) ⊗ 39.56 17.52 36.36 18.17 ⊗+ Entailment Generation 39.84 17.63 36.54 18.61 ⊗+ Question Generation 39.73 17.59 36.48 18.33 ⊗+ Entailment Gen. + Question Gen. 39.81 17.64 36.54 18.54 Table 1: CNN/DailyMail summarization results. ROUGE scores are full length F-1 (as previous work). All the multi-task improvements are statistically significant over the state-of-the-art baseline. Models R-1 R-2 R-L PREVIOUS WORK ABS+ (Rush et al., 2015) 29.76 11.88 26.96 RAS-El (Chopra et al., 2016) 33.78 15.97 31.15 lvt2k (Nallapati et al., 2016) 32.67 15.59 30.64 Pasunuru et al. (2017) 32.75 15.35 30.82 OUR MODELS 2-Layer Pointer Baseline ⊗ 34.26 16.40 32.03 ⊗+ Entailment Generation 35.45 17.16 33.19 ⊗+ Question Generation 35.48 17.31 32.97 ⊗+ Entailment + Question 35.98 17.76 33.63 Table 2: Summarization results on Gigaword. ROUGE scores are full length F-1. proval rate greater than 95%, and had at least 10,000 approved HITs. For the pairwise model comparisons discussed in Sec. 6.2, we showed the annotators the input article, the ground truth summary, and the two model summaries (randomly shuffled to anonymize model identities) – we then asked them to choose the better among the two model summaries or choose ‘Not-Distinguishable’ if both summaries are equally good/bad. Instructions for relevance were defined based on the summary containing salient/important information from the given article, being correct (i.e., avoiding contradictory/unrelated information), and avoiding redundancy. Instructions for readability were based on the summary’s fluency, grammaticality, and coherence. Training Details All our soft/hard and layerspecific sharing decisions were made on the validation/development set. Details of RNN hidden state sizes, Adam optimizer, mixing ratios, etc. are provided in the supplementary for reproducibility. 
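For reference, the ROUGE F1 variant used throughout the evaluation can be illustrated with the simplified sketch below (lowercased, whitespace-tokenized n-gram overlap for a single candidate/reference pair). The official ROUGE package additionally applies stemming, computes ROUGE-L via longest common subsequence, and aggregates over the whole test set, so this is only an illustration of the metric, not the evaluation script used here.

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_f1(candidate, reference, n=1):
    """Full-length ROUGE-n F1 between one candidate summary and one reference."""
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())          # clipped n-gram matches
    if not cand or not ref or overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```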
6 Results 6.1 Summarization (Primary Task) Results Pointer+Coverage Baseline We start from the strong model of See et al. (2017).3 Table 1 shows 3We use two layers so as to allow our high- versus lowlevel layer sharing intuition. Note that this does not increase that our baseline model performs better than or comparable to See et al. (2017).4 On Gigaword dataset, our baseline model (with pointer only, since coverage not needed for this single-sentence summarization task) performs better than all previous works, as shown in Table 2. Multi-Task with Entailment Generation We first perform multi-task learning between abstractive summarization and entailment generation with soft-sharing of parameters as discussed in Sec. 4. Table 1 and Table 2 shows that this multi-task setting is better than our strong baseline models and the improvements are statistically significant on all metrics5 on both CNN/DailyMail (p < 0.01 in ROUGE-1/ROUGE-L/METEOR and p < 0.05 in ROUGE-2) and Gigaword (p < 0.01 on all metrics) datasets, showing that entailment generation task is inducing useful inference skills to the summarization task (also see analysis examples in Sec. 7). Multi-Task with Question Generation For multi-task learning with question generation, the improvements are statistically significant in ROUGE-1 (p < 0.01), ROUGE-L (p < 0.05), and METEOR (p < 0.01) for CNN/DailyMail and in all metrics (p < 0.01) for Gigaword, compared to the respective baseline models. Also, Sec. 7 presents quantitative and qualitative analysis of this model’s improved saliency.6 the parameter size much (23M versus 22M for See et al. (2017)). 4As mentioned in the github for See et al. (2017), their publicly released pretrained model produces the lower scores that we represent by † in Table 1. 5Stat. significance is computed via bootstrap test (Noreen, 1989; Efron and Tibshirani, 1994) with 100K samples. 6In order to verify that our improvements were from the auxiliary tasks’ specific character/capabilities and not just due to adding more data, we separately trained word embeddings on each auxiliary dataset (i.e., SNLI and SQuAD) and incorporated them into the summarization model. We found that both our 2-way multi-task models perform sig693 Models Relevance Readability Total MTL VS. BASELINE MTL wins 43 40 83 Baseline wins 22 24 46 Non-distinguish. 35 36 71 MTL VS. SEE ET AL. (2017) MTL wins 39 33 72 See (2017) wins 29 38 67 Non-distinguish. 32 29 61 Table 3: CNN/DM Human Evaluation: pairwise comparison between our 3-way multi-task (MTL) model w.r.t. our baseline and See et al. (2017). Models Relevance Readability Total MTL wins 33 32 65 Baseline wins 22 22 44 Non-distinguish. 45 46 91 Table 4: Gigaword Human Evaluation: pairwise comparison between our 3-way multi-task (MTL) model w.r.t. our baseline. Multi-Task with Entailment and Question Generation Finally, we perform multi-task learning with all three tasks together, achieving the best of both worlds (inference skills and saliency). Table 1 and Table 2 show that our full multi-task model achieves the best scores on CNN/DailyMail and Gigaword datasets, and the improvements are statistically significant on all metrics on both CNN/DailyMail (p < 0.01 in ROUGE1/ROUGE-L/METEOR and p < 0.02 in ROUGE2) and Gigaword (p < 0.01 on all metrics). Finally, our 3-way multi-task model (with both entailment and question generation) outperforms the publicly-available pretrained result (†) of the previous SotA (See et al., 2017) with stat. 
significance (p < 0.01), as well the higher-reported results (⋆) on ROUGE-1/ROUGE-2 (p < 0.01). 6.2 Human Evaluation We also conducted a blind human evaluation on Amazon MTurk for relevance and readability, based on 100 samples, for both CNN/DailyMail and Gigaword (see instructions in Sec. 5). Table. 3 shows the CNN/DM results where we do pairwise comparison between our 3-way multi-task model’s output summaries w.r.t. our baseline summaries and w.r.t. See et al. (2017) summaries. As shown, our 3-way multi-task model achieves both higher relevance and higher readability scores w.r.t. the baseline. W.r.t. See et al. (2017), our MTL model is higher in relevance scores but a bit lower in nificantly better than these models using the auxiliary wordembeddings, suggesting that merely adding more data in not enough. Models R-1 R-2 R-L See et al. (2017) 34.30 14.25 30.82 Baseline 35.96 15.91 32.92 Multi-Task (EG + QG) 36.73 16.15 33.58 Table 5: ROUGE F1 scores on DUC-2002. readability scores (and is higher in terms of total aggregate scores). One potential reason for this lower readability score is that our entailment generation auxiliary task encourages our summarization model to rewrite more and to be more abstractive than See et al. (2017) – see abstractiveness results in Table 11. We also show human evaluation results on the Gigaword dataset in Table 4 (again based on pairwise comparisons for 100 samples), where we see that our MTL model is better than our state-of-theart baseline on both relevance and readability.7 6.3 Generalizability Results (DUC-2002) Next, we also tested our model’s generalizability/transfer skills, where we take the models trained on CNN/DailyMail and directly test them on DUC-2002. We take our baseline and 3way multi-task models, plus the pointer-coverage model from See et al. (2017).8 We only retune the beam-size for each of these three models separately (based on DUC-2003 as the validation set).9 As shown in Table 5, our multitask model achieves statistically significant improvements over the strong baseline (p < 0.01 in ROUGE-1 and ROUGE-L) and the pointercoverage model from See et al. (2017) (p < 0.01 in all metrics). This demonstrates that our model is able to generalize well and that the auxiliary knowledge helps more in low-resource scenarios. 6.4 Auxiliary Task Results In this section, we discuss the individual/separated performance of our auxiliary tasks. Entailment Generation We use the same architecture as described in Sec. 3.1 with pointer mech7Note that we did not have output files of any previous work’s model on Gigaword; however, our baseline is already a strong state-of-the-art model as shown in Table 2. 8We use the publicly-available pretrained model from See et al. (2017)’s github for these DUC transfer results, which produces the † results in Table 1. All other comparisons and analysis in our paper are based on their higher ⋆results. 9We follow previous work which has shown that larger beam values are better and feasible for DUC corpora. However, our MTL model still achieves stat. significant improvements (p < 0.01 in all metrics) over See et al. (2017) without beam retuning (i.e., with beam = 4). 694 Models M C R B Pasunuru&Bansal (2017) 29.6 117.8 62.4 40.6 Our 1-layer pointer EG 32.4 139.3 65.1 43.6 Our 2-layer pointer EG 32.3 140.0 64.4 43.7 Table 6: Performance of our pointer-based entailment generation (EG) models compared with previous SotA work. M, C, R, B are short for Meteor, CIDEr-D, ROUGE-L, and BLEU-4, resp. Models M C R B Du et al. 
(2017) 15.2 38.0 10.8 Our 1-layer pointer QG 15.4 75.3 36.2 9.2 Our 2-layer pointer QG 17.5 95.3 40.1 13.8 Table 7: Performance of our pointer-based question generation (QG) model w.r.t. previous work. anism, and Table 6 compares our model’s performance to Pasunuru and Bansal (2017). Our pointer mechanism gives a performance boost, since the entailment generation task involves copying from the given premise sentence, whereas the 2-layer model seems comparable to the 1-layer model. Also, the supplementary shows some output examples from our entailment generation model. Question Generation Again, we use same architecture as described in Sec. 3.1 along with pointer mechanism for the task of question generation. Table 7 compares the performance of our model w.r.t. the state-of-the-art Du et al. (2017). Also, the supplementary shows some output examples from our question generation model. 7 Ablation and Analysis Studies Soft-sharing vs. Hard-sharing As described in Sec. 4.2, we choose soft-sharing over hard-sharing because of the more expressive parameter sharing it provides to the model. Empirical results in Table. 8 prove that soft-sharing method is statistically significantly better than hard-sharing with p < 0.001 in all metrics.10 Comparison of Different Layer-Sharing Methods We also conducted ablation studies among various layer-sharing approaches. Table 8 shows results for soft-sharing models with decoder-only sharing (D1+D2; similar to Pasunuru et al. (2017)) as well as lower-layer sharing (encoder layer 1 + decoder layer 2, with and without attention shared). As shown, our final model (high-level semantic layer sharing E2+Attn+D1) outperforms 10In the interest of space, most of the analyses are shown for CNN/DailyMail experiments, but we observed similar trends for the Gigaword experiments as well. Models R-1 R-2 R-L M Final Model 39.81 17.64 36.54 18.54 SOFT-VS.-HARD SHARING Hard-sharing 39.51 17.44 36.33 18.21 LAYER SHARING METHODS D1+D2 39.62 17.49 36.44 18.34 E1+D2 39.51 17.51 36.37 18.15 E1+Attn+D2 39.32 17.36 36.11 17.88 Table 8: Ablation studies comparing our final multi-task model with hard-sharing and different alternative layer-sharing methods. Here E1, E2, D1, D2, Attn refer to parameters of the first/second layer of encoder/decoder, and attention parameters. Improvements of final model upon ablation experiments are all stat. signif. with p < 0.05. Models Average Entailment Probability Baseline 0.907 Multi-Task (EG) 0.912 Table 9: Entailment classification results of our baseline vs. EG-multi-task model (p < 0.001). these alternate sharing methods in all metrics with statistical significance (p < 0.05).11 Quantitative Improvements in Entailment We employ a state-of-the-art entailment classifier (Chen et al., 2017), and calculate the average of the entailment probability of each of the output summary’s sentences being entailed by the input source document. We do this for output summaries of our baseline and 2-way-EG multi-task model (with entailment generation). As can be seen in Table 9, our multi-task model improves upon the baseline in the aspect of being entailed by the source document (with statistical significance p < 0.001). 
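A minimal sketch of this entailment-based evaluation is shown below. Both split_sentences and entail_prob (a trained entailment classifier returning the probability that the premise entails the hypothesis, e.g. one built following Chen et al. (2017)) are assumed, user-supplied components rather than a specific library API.

```python
def avg_entailment_probability(source_document, summary, entail_prob, split_sentences):
    """Average probability that each summary sentence is entailed by the source document.

    entail_prob(premise, hypothesis) -> float in [0, 1] and
    split_sentences(text) -> list of sentences are assumed helpers.
    """
    sentences = split_sentences(summary)
    if not sentences:
        return 0.0
    probs = [entail_prob(source_document, sent) for sent in sentences]
    return sum(probs) / len(probs)
```

Averaging this statistic over the test outputs of the baseline and of the 2-way-EG model gives the kind of comparison reported in Table 9.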
Further, we use the Named Entity Recognition (NER) module from CoreNLP (Manning et al., 2014) to compute the number of times the output summary contains extraneous facts (i.e., named entities as detected by the NER system) that are not present in the source documents, based on the intuition that a well-entailed summary should not contain unrelated information not followed from the input premise. We found that our 2-way MTL model with entailment generation reduces this extraneous count by 17.2% w.r.t. the baseline. The qualitative examples below further discuss this issue of generating unrelated information. Quantitative Improvements in Saliency Detection For our saliency evaluation, we used the 11Note that all our soft and layer sharing decisions were strictly made on the dev/validation set (see Sec. 5). 695 Models Average Match Rate Baseline 27.75 % Multi-Task (QG) 28.06 % Table 10: Saliency classification results of our baseline vs. QG-multi-task model (p < 0.01). Models 2-gram 3-gram 4-gram See et al. (2017) 2.24 6.03 9.72 MTL (3-way) 2.84 6.83 10.66 Table 11: Abstractiveness: novel n-gram percent. answer-span prediction classifier from Pasunuru and Bansal (2018) trained on SQuAD (Rajpurkar et al., 2016) as the keyword detection classifier. We then annotate the ground-truth and model summaries with this keyword classifier and compute the % match, i.e., how many salient words from the ground-truth summary were also generated in the model summary. The results are shown in Table 10, where the 2-way-QG MTL model (with question generation) versus baseline improvement is stat. significant (p < 0.01). Moreover, we found 93 more cases where our 2-way-QG MTL model detects 2 or more additional salient keywords than the pointer baseline model (as opposed to vice versa), showing that sentence-level question generation task is helping the document-level summarization task in finding more salient terms. Qualitative Examples on Entailment and Saliency Improvements Fig. 2 presents an example of output summaries generated by See et al. (2017), our baseline, and our 3-way multitask model. See et al. (2017) and our baseline models generate phrases like “john hartson” and “hampden injustice” that don’t appear in the input document, hence they are not entailed by the input.12 Moreover, both models missed salient information like “josh meekings”, “leigh griffiths”, and “hoops”, that our multi-task model recovers.13 Hence, our 3-way multi-task model generates summaries that are both better at logical entailment and contain more salient information. We refer to supplementary Fig. 3 for more details and similar examples for separated 2-way multi-task models (supplementary Fig. 1, Fig. 2). Abstractiveness Analysis As suggested in See et al. (2017), we also compute the abstractiveness score as the number of novel n-grams between the 12These extra, non-entailed unrelated/contradictory information are not present at all in any paraphrase form in the input document. 13We consider the fill-in-the-blank highlights annotated by human on CNN/DailyMail dataset as salient information. Input Document: celtic have written to the scottish football association in order to gain an ‘ understanding ´of the refereeing decisions during their scottish cup semi-final defeat by inverness on sunday . the hoops were left outraged by referee steven mclean ´s failure to award a penalty or red card for a clear handball in the box by josh meekings to deny leigh griffith ´s goal-bound shot during the first-half . 
caley thistle went on to win the game 3-2 after extra-time and denied rory delia ´s men the chance to secure a domestic treble this season . celtic striker leigh griffiths has a goal-bound shot blocked by the outstretched arm of josh meekings . celtic ´s adam matthews -lrb- right -rrb- slides in with a strong challenge on nick ross in the scottish cup semi-final . ‘ given the level of reaction from our supporters and across football , we are duty bound to seek an understanding of what actually happened , ´celtic said in a statement . they added , ‘ we have not been given any other specific explanation so far and this is simply to understand the circumstances of what went on and why such an obvious error was made . ´however , the parkhead outfit made a point of congratulating their opponents , who have reached the first-ever scottish cup final in their history , describing caley as a ‘ fantastic club ´and saying ‘ reaching the final is a great achievement . ´celtic had taken the lead in the semi-final through defender virgil van dijk ´s curling free-kick on 18 minutes , but were unable to double that lead thanks to the meekings controversy . it allowed inverness a route back into the game and celtic had goalkeeper craig gordon sent off after the restart for scything down marley watkins in the area . greg tansey duly converted the resulting penalty . edward ofere then put caley thistle ahead , only for john guidetti to draw level for the bhoys . with the game seemingly heading for penalties , david raven scored the winner on 117 minutes , breaking thousands of celtic hearts . celtic captain scott brown -lrb- left -rrb- protests to referee steven mclean but the handball goes unpunished . griffiths shows off his acrobatic skills during celtic ´s eventual surprise defeat by inverness . celtic pair aleksandar tonev -lrb- left -rrb- and john guidetti look dejected as their hopes of a domestic treble end . Ground-truth: celtic were defeated 3-2 after extra-time in the scottish cup semi-final . leigh griffiths had a goal-bound shot blocked by a clear handball. however, no action was taken against offender josh meekings . the hoops have written the sfa for an ’understanding’ of the decision . See et al. (2017): john hartson was once on the end of a major hampden injustice while playing for celtic . but he can not see any point in his old club writing to the scottish football association over the latest controversy at the national stadium . hartson had a goal wrongly disallowed for offside while celtic were leading 1-0 at the time but went on to lose 3-2 . Our Baseline: john hartson scored the late winner in 3-2 win against celtic . celtic were leading 1-0 at the time but went on to lose 3-2 . some fans have questioned how referee steven mclean and additional assistant alan muir could have missed the infringement . Multi-task: celtic have written to the scottish football association in order to gain an ‘ understanding ’ of the refereeing decisions . the hoops were left outraged by referee steven mclean ’s failure to award a penalty or red card for a clear handball in the box by josh meekings . celtic striker leigh griffiths has a goal-bound shot blocked by the outstretched arm of josh meekings . Figure 3: Example of summaries generated by See et al. (2017), our baseline, and 3-way multi-task model with summarization and both entailment generation and question generation. The boxed-red highlighted words/phrases are not present in the input source document in any paraphrasing form. 
All the unboxedgreen highlighted words/phrases correspond to the salient information. See detailed discussion in Fig. 1 and Fig. 2 above. As shown, the outputs from See et al. (2017) and the baseline both include nonentailed words/phrases (e.g. “john hartson”), as well as they missed salient information (“hoops”, “josh meekings”, “leigh griffiths”) in their output summaries. Our multi-task model, however, manages to accomplish both, i.e., cover more salient information and also avoid unrelated information. Figure 2: Example summary from our 3way MTL model. The boxed-red highlights are extraneously-generated words not present/paraphrased in the input document. The unboxed-green highlights show salient phrases. model output summary and source document. As shown in Table 11, our multi-task model (EG + QG) is more abstractive than See et al. (2017). 8 Conclusion We presented a multi-task learning approach to improve abstractive summarization by incorporating the ability to detect salient information and to be logically entailed by the document, via question generation and entailment generation auxiliary tasks. We propose effective soft and highlevel (semantic) layer-specific parameter sharing and achieve significant improvements over the state-of-the-art on two popular datasets, as well as a generalizability/transfer DUC-2002 setup. Acknowledgments We thank the reviewers for their helpful comments. This work was supported by DARPA (YFA17-D17AP00022), Google Faculty Research Award, Bloomberg Data Science Research Grant, and NVidia GPU awards. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency. 696 References Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. 2007. Multi-task feature learning. In NIPS. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? In ACL. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP. Rich Caruana. 1998. Multitask learning. In Learning to learn, pages 95–133. Springer. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1657–1668. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for modeling documents. In IJCAI. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Doll´ar, and C Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325. Jackie Chi Kit Cheung and Gerald Penn. 2014. Unsupervised sentence enhancement for automatic summarization. In EMNLP, pages 775–786. Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In HLT-NAACL. James Clarke and Mirella Lapata. 2008. Global inference for sentence compression: An integer linear programming approach. Journal of Artificial Intelligence Research, 31:399–429. 
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment, pages 177– 190. Springer. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In EACL. Shibhansh Dohare and Harish Karnick. 2017. Text summarization using abstract meaning representation. arXiv preprint arXiv:1706.01678. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In ACL. Bradley Efron and Robert J Tibshirani. 1994. An introduction to the bootstrap. CRC press. Tobias Falke and Iryna Gurevych. 2017. Bringing structure into summaries: Crowdsourcing a benchmark corpus of concept maps. In EMNLP. Katja Filippova, Enrique Alfonseca, Carlos A Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with lstms. In EMNLP, pages 360–368. Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: a graph-based approach to abstractive summarization of highly redundant opinions. In Proceedings of the 23rd international conference on computational linguistics, pages 340–348. ACL. Shima Gerani, Yashar Mehdad, Giuseppe Carenini, Raymond T Ng, and Bita Nejat. 2014. Abstractive summarization of product reviews using discourse structure. In EMNLP, volume 14, pages 1602–1613. George Giannakopoulos. 2009. Automatic summarization from multiple documents. Ph. D. dissertation. Anand Gupta, Manpreet Kaur, Adarsh Singh, Aseem Goel, and Shachar Mirkin. 2014. Text summarization through entailment-based minimum vertex cover. Lexical and Computational Semantics (* SEM 2014), page 75. Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple nlp tasks. In EMNLP. Stefan Henß, Margot Mieskes, and Iryna Gurevych. 2015. A reinforcement learning approach for adaptive single-and multi-document summarization. In GSCL, pages 3–12. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In NIPS, pages 1693–1701. Sergio Jimenez, George Duenas, Julia Baquero, Alexander Gelbukh, Av Juan Dios B´atiz, and Av Mendiz´abal. 2014. UNAL-NLP: Combining soft cardinality features for semantic textual similarity, relatedness and entailment. In SemEval, pages 732– 742. Hongyan Jing and Kathleen R. McKeown. 2000. Cut and paste based text summarization. In Proceedings of the 1st North American Chapter of the Association for Computational Linguistics Conference, NAACL 2000, pages 178–185, Stroudsburg, PA, USA. Association for Computational Linguistics. 697 Lukasz Kaiser, Aidan N. Gomez, Noam Shazeer, Ashish Vaswani, Niki Parmar, Llion Jones, and Jakob Uszkoreit. 2017. One model to learn them all. CoRR, abs/1706.05137. Chris Kedzie, Kathleen McKeown, and Fernando Diaz. 2015. Predicting salient updates for disaster summarization. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1608–1617. Kevin Knight and Daniel Marcu. 2002. Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence, 139(1):91–107. Abhishek Kumar and Hal Daum´e III. 
2012. Learning task grouping and overlap in multi-task learning. In ICML. Alice Lai and Julia Hockenmaier. 2014. Illinois-lh: A denotational and distributional approach to semantics. Proc. SemEval, 2:5. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 workshop, volume 8. Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A Smith. 2015. Toward abstractive summarization using semantic representations. In NAACL: HLT, pages 1077–1086. Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. In ICLR. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Yashar Mehdad, Giuseppe Carenini, Frank W Tompa, and Raymond T Ng. 2013. Abstractive meeting summarization with entailment and fusion. In Proc. of the 14th European Workshop on Natural Language Generation, pages 136–146. Ishan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert. 2016. Cross-stitch networks for multi-task learning. In CVPR, pages 3994–4003. Ramesh Nallapati, Bowen Zhou, Cicero Nogueira dos santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-tosequence rnns and beyond. In CoNLL. Eric W Noreen. 1989. Computer-intensive methods for testing hypotheses. Wiley New York. Ramakanth Pasunuru and Mohit Bansal. 2017. Multitask video captioning with video and entailment generation. In ACL. Ramakanth Pasunuru and Mohit Bansal. 2018. Multireward reinforced summarization with saliency and entailment. In NAACL. Ramakanth Pasunuru, Han Guo, and Mohit Bansal. 2017. Towards improving abstractive summarization via entailment generation. In NFiS@EMNLP. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In ICLR. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In EMNLP. Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Sogaard. 2017. Sluice networks: Learning what to share between loosely related tasks. CoRR, abs/1705.08142. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In EMNLP. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In ACL. Jun Suzuki and Masaaki Nagata. 2016. Rnn-based encoder-decoder approach with word frequency estimation. In EACL. Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graphbased attentional neural model. In ACL. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In NIPS, pages 2692–2700. Lu Wang, Hema Raghavan, Vittorio Castelli, Radu Florian, and Claire Cardie. 2013. A sentence compression based framework to query-focused multidocument summarization. In ACL. Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818– 833. Springer.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 698–708 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 698 Modeling and Prediction of Online Product Review Helpfulness: A Survey Gerardo Ocampo Diaz and Vincent Ng Human Language Technology Research Institute University of Texas at Dallas Richardson, TX 75083-0688 {godiaz,vince}@hlt.utdallas.edu Abstract As the popularity of free-form usergenerated reviews in e-commerce and review websites continues to increase, there is a growing need for automatic mechanisms that sift through the vast number of reviews and identify quality content. Online review helpfulness modeling and prediction is a task which studies the factors that determine review helpfulness and attempts to accurately predict it. This survey paper provides an overview of the most relevant work on product review helpfulness prediction and understanding in the past decade, discusses gained insights, and provides guidelines for future research. 1 Introduction Research on the computational modeling and prediction of online review helpfulness has generally proceeded in two directions. One concerns the automatic prediction of the helpfulness of a review, where helpfulness is typically defined as the fraction of “helpful” votes it receives. Review helpfulness research in the NLP and text mining communities has largely focused on identifying textual content features of a review that are useful for automatic helpfulness prediction. The other direction concerns understanding the nature of helpfulness, where researchers seek to understand the process of human evaluation of review helpfulness and the factors that influence it. The increasing popularity of modeling and prediction of review helpfulness since its inception more than a decade ago can be attributed to its practical significance. Nowadays, customers regularly rely on different kinds of user reviews (e.g., hotels, restaurants, products, movies) to decide what to spend their money on. Given the large number of reviews available in web platforms, a review helpfulness prediction system could substantially save people’s time by allowing them to focus on the most helpful reviews. Hence, a successful review helpfulness prediction system could be as useful as a product recommender system. Unfortunately, unlike in many key areas of research in NLP, it is by no means easy to determine the state of the art in automatic helpfulness prediction. Empirical comparisons are complicated for at least two reasons. First, historically, systems have been trained on different datasets, not all of which are publicly available. Second, researchers have not built on the successes of each other, evaluating their ideas against baselines that are not necessarily the state of the art. Worse still, new features are not always properly evaluated. This somewhat disorganized situation can be attributed in part to the lack of a common forum for researchers to discuss a long-term vision and a roadmap for research in this area. Our goal in this survey is to present an overview of the current state of research on computational modeling and prediction of product review helpfulness. Our focus on product reviews is motivated by the fact that they are the most widely studied type of review. Despite this focus, it is by no means the case that our work is only applicable to product reviews. 
While online platforms differ in objectives and review domains (e.g., Amazon is an online product store, Yelp is a business review website, and TripAdvisor is a booking website for a variety of travel activities), the principles that govern the helpfulness voting process are robust across platforms and domains. This means that most, if not all, of our findings are transferable to other kinds of online reviews. We believe that this survey will be useful to researchers and developers interested in a better understanding of the mechanisms behind review helpfulness. 699 2 Datasets The main source of product reviews used in past research is Amazon.com, but interesting work has been done on data from Ciao.com (a now defunct product review website). The main difference between these two sources is the metadata associated with them: Amazon.com offers anonymous voting information, whereas Ciao attaches userIDs to helpfulness votes. Ciao also uses helpfulness votes in the range of 0 to 5, whereas Amazon votes are binary. Furthermore, Ciao offers information on a social trust network, where users choose to connect to reviewers if they find their reviews consistently helpful, unlike Amazon.com, which does not offer any such social trust network. These differences have allowed researchers to make observations on Ciao.com data that cannot be made on Amazon.com. Datasets are collected from the aforementioned sources through web scraping or APIs. When it comes to Amazon datasets, researchers can choose one of two pre-collected datasets: the Multi-Domain Sentiment Dataset1(Blitzer et al., 2007) (MDSD) and the Amazon Review Dataset2 (McAuley et al., 2015; He and McAuley, 2016) (ARD). These datasets have a similar number of product categories (25 and 24, respectively). However, the latest version of MDSD contains 1,422,530 reviews, while ARD contains 142.8 million reviews. Furthermore, ARD offers a variety of metadata that is not present in MDSD (e.g., product salesrank). To the best of our knowledge, there is only one pre-collected Ciao dataset3 (302,232 reviews, 43,666 users, and 8,894,899 helpfulness votes), which was made available by Tang et al. (2013). Few researchers have used these pre-collected datasets, however. Instead, most have relied on collecting their own datasets directly from websites. As mentioned before, the general lack of testing on pre-collected datasets has made system comparisons difficult. The majority of researchers simply use helpfulness scores (the fraction of users who vote a review as helpful) as found in websites as ground truth for system training and evaluation. Given that these scores are volatile when reviews have few votes, researchers frequently filter out reviews 1https://www.cs.jhu.edu/˜mdredze/ datasets/sentiment/ 2http://jmcauley.ucsd.edu/data/amazon/ 3https://www.cse.msu.edu/˜tangjili/ trust.html Votes : [97, 102] Text : I’m a much bigger fan of the Targus folding keyboard. For starters it folds into the size of a handspring. Second of all the Landware version’s keys are incredibly small. The one feature benefit of landware is that it’s a rigid design so it can be used on your lap - while the Targus version is very flexible and needs to be placed on a flat surface to type. Figure 1: Example Review that do not have a minimum number of votes. 
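As a concrete illustration of this preprocessing step, the sketch below computes helpfulness scores from (helpful, total) vote pairs, as in the [97, 102] example of Figure 1, and discards reviews with too few votes. The review field names and the minimum-vote threshold are illustrative assumptions, since the exact cutoff varies across studies.

```python
def helpfulness_score(helpful_votes, total_votes):
    """Fraction of voters who found the review helpful, e.g. (97, 102) -> ~0.95."""
    return helpful_votes / total_votes if total_votes else None

def filter_reviews(reviews, min_votes=5):
    """Keep only reviews with enough votes for a stable helpfulness score.

    Each review is assumed to be a dict with 'helpful' and 'total' vote counts;
    min_votes is an illustrative choice, not a standard value.
    """
    kept = []
    for review in reviews:
        if review['total'] >= min_votes:
            review = dict(review,
                          helpfulness=helpfulness_score(review['helpful'], review['total']))
            kept.append(review)
    return kept
```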
Some researchers have argued that helpfulness scores might not be good indicators of actual helpfulness, and have resorted to rating or ranking reviews themselves (Liu et al., 2007; Tsur and Rappoport, 2009; Yang et al., 2015), but these approaches are not the norm. Researchers have observed interesting patterns in review datasets. For instance, positive reviews are more likely to have high helpfulness scores (O’Mahony et al., 2010; Huang et al., 2015), top ranking reviews hold a disproportionate amount of votes when compared to lower-ranked reviews (Liu et al., 2007), and more recent reviews tend to get fewer votes than older reviews (Liu et al., 2007). Although some of these effects may be the consequence of website voting mechanisms (e.g., Amazon shows reviews based on their helpfulness), they should be taken in consideration when selecting and pre-processing datasets. Perhaps the most important observation is that helpfulness scores may not be strongly correlated to review quality (Liu et al., 2007; DanescuNiculescu-Mizil et al., 2009; Tsur and Rappoport, 2009; Ghose and Ipeirotis, 2011; Yang et al., 2015). In at least one study, independent annotators agreed more frequently (85%) with an alternate helpfulness ranking than with one based on helpfulness scores (Tsur and Rappoport, 2009). The example review in Figure 1 shows discrepancies between quality and score. While this review is relatively short and contains only a couple of judgments on its product, 97 out of 102 people voted it as helpful (0.95 score). The quality of this review does not seem to match its near-perfect score. As we will see in Section 4, these discrepancies could be explained as the consequence of several moderating factors, which have a direct influence on the helpfulness voting process but are largely ignored in current helpfulness prediction systems. 700 3 Helpfulness Prediction Helpfulness prediction tasks include score regression (predicting the helpfulness score h ∈[0, 1] of a review), binary review classification (classifying a review as helpful or not), and review ranking (ordering a set of reviews by their helpfulness). In this section, we present the evaluation measures and approaches explored in past work. 3.1 Performance Measures Regarding performance measures, classification tasks have used Precision, Recall, and F-measure. Regression tasks have mostly used mean squared error (MSE), which measures the average of the sum of the squared error, and root mean squared error (RMSE), which is defined as the square root of MSE. Ranking systems have used Normalized Discounted Cumulative Gain (NDCG), which is popularly used to measure the relevance of search results in information retrieval (here, helpfulness is used as a measure of relevance), and NDCG@k, a special version of NDCG that only takes into account the top k items in a ranking (this is used because users only read a limited number of reviews). Researchers have also used Pearson and Spearman correlations to measure model fit and ranking performance. 3.2 Approaches Next, we provide a high-level overview of the approaches that have been employed to predict the helpfulness of online product reviews. Regression has primarily been attempted through support vector regression (Kim et al., 2006; Zhang and Varadarajan, 2006; Yang et al., 2015). 
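A minimal scikit-learn sketch of this kind of support vector regression is given below, using a few commonly reported content features (unigram TF-IDF, review length in words, and star rating). It illustrates the generic recipe rather than reimplementing any particular published system, and the feature set and hyperparameters are illustrative choices.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVR

def build_features(texts, star_ratings, vectorizer=None):
    """Unigram TF-IDF plus word-count and star-rating columns."""
    if vectorizer is None:
        vectorizer = TfidfVectorizer(min_df=2)
        tfidf = vectorizer.fit_transform(texts)
    else:
        tfidf = vectorizer.transform(texts)
    lengths = csr_matrix(np.array([[len(t.split())] for t in texts], dtype=float))
    stars = csr_matrix(np.array(star_ratings, dtype=float).reshape(-1, 1))
    return hstack([tfidf, lengths, stars]).tocsr(), vectorizer

def train_helpfulness_regressor(texts, star_ratings, helpfulness_scores):
    """Fit an SVR to predict helpfulness scores in [0, 1] from review features."""
    X, vectorizer = build_features(texts, star_ratings)
    model = SVR(kernel='rbf', C=1.0, epsilon=0.05)   # illustrative hyperparameters
    model.fit(X, helpfulness_scores)
    return model, vectorizer
```

At prediction time, build_features is called again with the fitted vectorizer so that new reviews are projected into the same feature space.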
However, probabilistic matrix factorization (Tang et al., 2013), linear regression (Lu et al., 2010), and extended tensor factorization models (Moghaddam et al., 2012) have successfully been used to integrate sophisticated constraints into the learning process and have achieved improvements over regular regression models. Multi-layer neural networks have also been used towards this purpose (Lee and Choeh, 2014). In particular, there seems to be progress toward more sophisticated models. For instance, Mukherjee et al. (2017) used a HMM-LDA based model to jointly infer reviewer expertise, predict aspects, and review helpfulness, which showed significant improvement over simpler models. Classification approaches have mostly been based on SVMs (Kim et al., 2006; Hong et al., 2012; Zeng et al., 2014; Krishnamoorthy, 2015), but thresholded linear regression models (Ghose and Ipeirotis, 2011), Naive Bayes, Random Forests, J48 and JRip have also been used (O’Mahony et al., 2010; Ghose and Ipeirotis, 2011; Krishnamoorthy, 2015). Recent work has also approached this task with neural networks (Malik and Hussain, 2017; Chen et al., 2018). Regarding ranking, some researchers have used ranking-specific methods such as SVM ranking (Tsur and Rappoport, 2009; Hong et al., 2012), but others have attempted to recover rankings from classification (O’Mahony and Smyth, 2009, 2010) or regression (Mukherjee et al., 2017) outputs. Table 1 provides an overview of some of the most relevant features used in helpfulness prediction systems, explains the intuition behind them and, whenever possible, their correlation to helpfulness and impact on performance. Here, we differentiate primarily between content and context features. Content features focus on information directly derived from the review, such as review text and star rating, whereas context features focus on information from outside the review, such as reviewer/user information. Content features include Review Length Features, which are based on the intuition that longer reviews have more information and are thus more helpful; Readability Features, which are based on the conjecture that if a review is easier to read, it will be found helpful by more users; Word-Based Features, which are based on the idea of identifying key words whose presence indicates the importance of the information found in a review; WordCategory Features, which identify the presence of words belonging to specific word lists; and Content Divergence Features, which measure how different the contents of the review are from specific reference texts. Context features include Reviewer Features, which collect meaningful reviewer historical information to predict future helpfulness scores; and User-Reviewer Idiosyncrasy Features, which attempt to capture the similarity between users and reviewers. We also include a couple of Miscellaneous Features, which are based on metadata and sentiment analysis; these features are better understood in the context of the moderating factors presented in Section 4. Researchers have managed to mostly agree on some observations regarding which features are 701 Feature Description Comments Content Features Review Length Features: Measure review length using different metrics. Average Sentence Length Used in Liu et al. (2007), Lu et al. (2010), and Yang et al. (2015) without studying its individual predictive power. No. of Sentences Used in Liu et al. (2007), Lu et al. (2010), Yang et al. 
(2015) Number of Words Positive correlation (Mudambi and Schuff, 2010); shown to subdue sentence features (Kim et al., 2006). Readability Features: Measure how easy a review is to read. Readability Measures how easy a text is to read Ghose and Ipeirotis (2011) and Korfiatis et al. (2012) found a positive correlation. Spelling Errors Ghose and Ipeirotis (2011) found a negative correlation. Paragraph Metrics Avg. paragraph length, no. of paragraphs Kim et al. (2006) found an insignificant difference when included in a binary classifier. Word-Based Features: Indicate the presence of meaningful key words. Unigram TF-IDF Degree of word importance in relation to all reviews for a product Kim et al. (2006) observed a positive correlation and performance improvement when combined with review length. Dominant Terms Presence of particularly important terms for a specific book Tsur and Rappoport (2009) based entire system on this metric. Tailored for book reviews: similar to UGR TF-IDF. Word-Category Features: Indicate the presence of words of lists of semantically related words in review. Product features Attempt to identify the presence of important topics Liu et al. (2007) showed 2.89-3.22% improvement. Hong et al. (2012) presented a system which improves ∼8% accuracy over Kim et al. (2006) and Liu et al. (2007) but the individual predictive power of the feature was not analyzed. Kim et al. (2006) found it inferior to UGR TF-IDF. Subjective Tokens Words taken from lists of subjective adjectives and nouns Zhang and Varadarajan (2006) found it “barely” correlated with helpfulness. No significant performance improvement. Sentiment Words Attempt to capture the presence of opinions, analyses, emotions etc. Kim et al. (2006) found these features inferior to UGR TFIDF; Yang et al. (2015) found the opposite and significant improvement over simple text features regression. Syntactic tokens A variety of tokens including nouns, adjectives, adverbs, wh- determiners etc. Kim et al. (2006) found no performance gains; Hong et al. (2012) built a system with volition auxiliaries and sentence tense which showed ∼8% accuracy improvement over Kim et al. (2006) and Liu et al. (2007), but the individual predictive power of these features was not studied. Content Divergence Features: Measure the difference between reviews and some reference text. Review-product descr. divergence Helpful reviews should echo the contents of product description Zhang and Varadarajan (2006) found no significant improvement in model correlation. Sentiment divergence The mainstream opinion polarity for a product and its strength are compared to those of the review Hong et al. (2012) presented a system which improved ∼ 8% accuracy over Kim et al. (2006) and Liu et al. (2007) but the individual predictive power of the feature was not analyzed. KL average review divergence Divergence between the unigram language model of the review and aggregated product reviews Lu et al. (2010) introduced it in their baseline model along with a variety of features; the individual predictive power of the feature was not studied. Miscellaneous Features Star rating The review-assigned product star rating Positively correlated to helpfulness (Huang et al., 2015). Influence explained by Danescu-Niculescu-Mizil et al. (2009) and Mudambi and Schuff (2010) (see Sections 4.4, 4.2). Subjectivity The probability of a review and its sentences being subjective Based on the conjecture that readers prefer subjective or objective info. based on product type. 
Empirical evidence found in Ghose and Ipeirotis (2011) (see Section 4.5). Context Features Reviewer Features: Capture reviewer statistics. # Past Reviews Previous reviews written by reviewer No influence found by Huang et al. (2015). # Helpful Votes Previous votes received by reviewer No influence found by Huang et al. (2015). Avg. Helpfulness Reviewer avg. past helpfulness Positive correlation found by Huang et al. (2015). Mixed effects found by Ghose and Ipeirotis (2011). User-Reviewer Idiosyncrasy: Capture the similarity between users and reviewers. Connection Strength User-Reviewer connection strength in a social network using the metric introduced in Tang et al. (2012) Relative performance increase of 1.15-28.38% (Lu et al., 2010; Tang et al., 2013) (see Section 4.3) User-Reviewer Product Rating Similarity User-Reviewer product rating history similarity Relative performance increase of 28.38% (Tang et al., 2013) (see Section 4.3) Table 1: Summary of Observed Features on Helpfulness 702 useful for helpfulness prediction4. Review length has been shown multiple times to be strongly (positively) correlated to helpfulness (Kim et al., 2006; Liu et al., 2007; Otterbacher, 2009; Mudambi and Schuff, 2010; Cao et al., 2011; Pan and Zhang, 2011; Yang et al., 2015; Bjering et al., 2015; Huang et al., 2015; Salehan and Kim, 2016) with only few researchers disagreeing on the existence of the correlation (Zhang and Varadarajan, 2006; Korfiatis et al., 2012). There is general agreement that a review’s star rating can also be useful for helpfulness prediction. Some researchers use the extremity of the rating (positive, negative, neutral) as a feature (positive and negative reviews are seen as more useful than neutral reviews) (Ghose and Ipeirotis, 2011), while others use star ratings directly (Kim et al., 2006; Mudambi and Schuff, 2010; Pan and Zhang, 2011; Zeng et al., 2014; Huang et al., 2015; Bjering et al., 2015). Some researchers argue that star rating is useful because of the presence of positivity bias (i.e., reviews with positive star ratings are seen as more helpful), while few researchers disagree on the existence of a connection between star ratings and helpfulness (Otterbacher, 2009). Review readability metrics, which measure how “easy” it is to read a review, have been found to have a positive correlation to helpfulness (Ghose and Ipeirotis, 2011; Korfiatis et al., 2012), but have not been as thoroughly tested as other features. A recurrent idea is that of capturing review content relevance: unigram TF-IDF statistics (the relative importance of the words in a review when compared to other reviews of the same product) (Kim et al., 2006), dominant terms (computed using a custom metric similar to TF-IDF, but tailored for book reviews) (Tsur and Rappoport, 2009), and latent review topics (the themes present in the review) (McAuley and Leskovec, 2013; Mukherjee et al., 2017) stand out particularly. 3.3 The State of Helpfulness Prediction The classical approach to helpfulness prediction has consisted of finding new hand-crafted features that can improve system performance. Although many interesting features continue to be found (e.g., emotion (Martin and Pu, 2014), aspect (Yang et al., 2016), and argument (Liu et al., 2017) based features), advances have been hindered by the lack 4We do not discuss features that are not helpful since, in general, they are not as thoroughly tested as those mentioned here. 
of standard datasets, which are needed for performance comparisons, and feature ablation studies, which are needed to properly evaluate the contribution of newly proposed features. Even so, as in many other areas of NLP, recent systems based on neural network architectures have shown performance increases both when using hand-crafted features (Lee and Choeh, 2014; Malik and Hussain, 2017) and when performing raw-text predictions (Chen et al., 2018). Moreover, recent systems have been shown to be able to tackle domain knowledge transfer considerably well (Chen et al., 2018). Although these systems were not compared against a robust hand-crafted feature baseline, the fact that authors are beginning to use pre-collected datasets (ARD) enables fairer comparisons. Intuitively, we expect models based on neural network architectures to be better at capturing latent semantics, as well as some of the feature interactions we will present in Section 4. In parallel, systems that have incorporated user and reviewer features, particularly those that learn from individual user votes (Tang et al., 2013), have shown large performance increases over extensive hand-crafted-only feature baselines (Lu et al., 2010; Tang et al., 2013), and more sophisticated models focused on review semantics (Mukherjee et al., 2017) have also outperformed hand-crafted-only feature baselines significantly. 4 The Helpfulness Voting Process: Entities and Moderating Factors So far we have presented an overview of the features used in helpfulness prediction systems. With a few exceptions (Mudambi and Schuff, 2010; Ghose and Ipeirotis, 2011; Tang et al., 2013), past work on helpfulness prediction has focused exclusively on non-moderating factors (i.e., observable features which can contribute towards helpfulness scores, but cannot alter or influence the voting process itself). Even so, researchers have gained key insights on certain moderating factors (i.e., mechanisms and properties that can influence the voting process outcome). These findings are relevant not only because they can be used to enhance helpfulness prediction, but because, when put together, they constitute arguments in favor of reconsidering the helpfulness prediction task and its focus. In this section, we will present a variety of moderating factors. 703 4.1 The Voting Process and its Entities To start our discussion on moderating factors, let us provide a brief, intuitive definition of the steps involved in the helpfulness voting process and outline the entities involved in it5: 1. A reviewer, a, writes a review r on product p 2. A user, u, reads the review by reviewer a on product p and internally assigns it a score s using some criterion c. 3. If the score s is over some threshold t, the user votes the review as “helpful”. Otherwise, the user votes it as “not helpful”. Intuitively, one can expect these four entities — reviewers, users, reviews, and products — to play a role in determining the outcome of the voting process. Moreover, it is reasonable to expect both the nature of these entities and the interactions between them to be sometimes expressed through hidden features/variables. For instance, one cannot directly observe a user’s opinion of a product unless he/she writes a review, and one cannot directly observe a particular user’s information needs or a product’s nature, which would indicate what kind of review is most helpful for it. 
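To make the three-step description above concrete, here is a minimal sketch of the voting model in Python. The score, criterion, and threshold are stand-ins for the hidden quantities just discussed (they are not observable in review datasets), and every name in the snippet is our own illustrative assumption rather than part of any system surveyed here.

```python
# Hypothetical formalization of the voting process sketched above.
# The per-user score s and threshold t are hidden variables; only the
# aggregated "x of y found this helpful" counts are observable in practice.

def helpful_vote(score: float, threshold: float) -> bool:
    """Step 3: vote 'helpful' exactly when the internal score s exceeds t."""
    return score > threshold

def aggregate_votes(scores_and_thresholds):
    """Collapse individual (s, t) pairs into the observable vote counts."""
    votes = [helpful_vote(s, t) for s, t in scores_and_thresholds]
    return sum(votes), len(votes)

# Five users read the same review r of product p; each applies their own
# criterion c (here reduced to a pre-computed score) and their own threshold.
hidden = [(0.9, 0.5), (0.4, 0.5), (0.7, 0.6), (0.2, 0.3), (0.6, 0.5)]
positive, total = aggregate_votes(hidden)
print(f"{positive} of {total} users found this review helpful")
```

The point of the sketch is simply that classical helpfulness prediction models only the observable aggregate on the last line, while the moderating factors discussed below act on the hidden scores and thresholds themselves.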
In the next subsections, we will discuss different moderating factors that have been discovered for each of these entities, the observable features that have been used to approximate them, and their effects on the voting process. 4.2 User-Product Predispositions Danescu-Niculescu-Mizil et al. (2009) showed that the difference between user and reviewer opinions can influence helpfulness votes. Since user opinions are hidden, based on the assumption that star ratings are good indicators of opinion, Danescu et al. studied the interplay between review star rating deviation from the mean (the divergence between the reviewer’s opinion and the average opinion of the product) and star rating variance (the level of opinion consensus for a product) for 1 million Amazon US book reviews, making the following observations: 1. When star rating variance is very low, the most helpful reviews are those with the average star rating. 2. With moderate variance, the most helpful reviews are those with a slightly-above-average star rating. 5Here we assume voting participation and do not attempt to reconcile it with polarity, but a deeper understanding of participation could lead to better interpretations of votes. 3. As variance becomes large, reviews with star ratings both above and below the average are more helpful (positive reviews still deemed somewhat more helpful). These observations held when controlling for review text, and constitute one of the most straightforward pieces of evidence against textonly review helpfulness understanding and prediction. Although these observations show only aggregated user behavior, they have a theoretical backing by past research (Wilson and Peterson, 1989), and hint that a deeper understanding of user opinions can lead to better prediction systems. 4.3 User-Reviewer Idiosyncrasy Tang et al. (2013) found that, by observing users’ actions, user-reviewer idiosyncrasy similarity could be measured and used to enhance helpfulness prediction. They showed that the existence and strength of connections between reviewers and users in a social network, along with product rating history similarity, moderated the general user opinion of a particular reviewer’s reviews. Specifically, they analyzed social network connections in Ciao’s circle of trust, a social network where a user connects to a reviewer if they consistently find their reviews helpful, along with users’ and reviewers’ product rating histories, and made the following observations: 1. Users are likely to think of reviews from their connected reviewers as more helpful. 2. The more strongly users connect to a reviewer, the more helpful users consider the reviews from the reviewer.6 3. Users are likely to consider the reviews from reviewers with similar product ratings as more helpful. 4. The more similar the product ratings of users and reviewers, the more helpful users consider the reviews from the reviewer. As Tang et al. proposed that differences in helpfulness scores are not necessarily a consequence of review quality, but of differences of opinion between users (if everyone thought the same way, all reviews would have a score of either 0 or 1), they were among the first to advocate for user-specific helpfulness prediction, which aims to predict how a specific user will vote, instead of predicting the 6Connection strength is measured with the metric introduced in Tang et al. (2012). 704 aggregated votes of the community. Under this approach, Tang et al. 
implemented their observations in a probabilistic matrix factorization framework and achieved a 28.38% relative improvement over a text-reviewer-based baseline that included an extensive set of text features present in other systems (Lu et al., 2010). This suggests that the similarity between reviewers’ idiosyncrasy as expressed in reviews and that of users can be approximated by studying user and reviewer actions. Further, the information used by Tang et al. (2013) towards this purpose is not the only kind that could prove useful. It could easily be extended to include the vast amount of user information stored by current day e-commerce websites such as Amazon. Users’ age, gender, purchase history, location, browsing and purchase patterns, and review history (both writing and rating) could be used to define prior probabilities on some user x liking the review of a reviewer y.7 As some of this information has already been used in recommender systems, it would be of interest to explore the extent to which techniques from this field (specifically those from collaborative filtering) can be applied to helpfulness prediction. 4.4 Product Nature Product nature moderates users’ information needs and the criteria of a helpful review. Online stores now have an astoundingly large catalog of products, which can be very different in price, use, target market, complexity, popularity, etc. Hence, it is reasonable to expect the information needs of users to depend at least somewhat on the product in question. Consider the task of buying a house vs buying a TV. We can easily see that the amount and nature of information needed to buy a TV or a house is considerably different. Further, the quality of these products stems from different sources: a TV’s perceived quality depends mostly on its technical features, whereas the perceived quality of a house depends to some degree on the potential buyer. Therefore, it is perfectly sensible to expect helpful reviews for products of different “types” to be different. Below we show that the nature of a product moderates the effects 7Since a reviewer’s idiosyncrasy is embodied in his/her reviews, we do not rule out the possibility that more complex text representations can also be used to approximate it. Regardless, these sources of information should still be able to complement prediction systems. of star ratings, review length, and subjectivity on helpfulness scores. Researchers have proven the influence of product nature on the helpfulness voting process by differentiating between search and experience goods. According to Nelson (1970, 1974), the quality of search goods is derived from objective attributes (e.g., a camera), whereas the quality of experience goods is based on subjective attributes (e.g., a music CD). Mudambi and Schuff (2010) first identified that review length (word count) is positively correlated to review helpfulness, and then made the following observations: • For experience goods, reviews with extreme star ratings (high or low) are associated with lower levels of helpfulness than reviews with moderate star ratings. • Review depth has a greater positive effect on the helpfulness of the review for search goods than experience goods. These observations make it clear that the nature of a product can impact the way a user will judge a review’s helpfulness. However, approximating the nature of a product is not a trivial task. 
As stated by Mudambi and Schuff, even if these observations hold, classifying products as search or experience goods is a complicated task, since products fall at some point along a spectrum and commonly have aspects of both search and experience goods. This means that finding methods of automatically discovering product features or classifications that influence the helpfulness voting process is an important task for future research. What other product categorizations are there that could influence helpfulness and be easily collected/computed? We propose to start by using categories already present in e-commerce websites. Intuitively, it would make sense for products under the “computers” category to be similar in their information needs. And as such, systems trained on computer reviews should learn similar parameters. As most e-commerce websites use a hierarchical product categorization system, by starting at the most specific subcategories one could potentially generalize subcategory-learned parameters into category-wide trends. 4.5 Review Nature A review’s style influences the properties that make it helpful. It is well known that when it comes to expressing opinions, the way information is presented can be almost as important as the 705 information itself. Even if two reviewers have a similar opinion on a product, the way they frame their opinion can make a big difference when it comes to how helpful their reviews are. Consider the task of deciding whether to buy a specific car. What advice could prove useful for this decision? We could consider regular advice that is mostly concerned with the car itself, comparative advice that relates various aspects of the car with its alternatives, and suggestive advice, which focuses on usage recommendations. Qazi et al. (2016) used these three types of advice to classify hotel reviews from TripAdvisor.com and made the following observations: • For comparative reviews, longer reviews are considered more helpful. • For suggestive and regular reviews, shorter reviews are more helpful. Similar findings on the influence of review nature were made by Huang et al. (2015): when differentiating between reviews written by regular and top Amazon reviewers, they made the following observations: • The influence of word count on review helpfulness is bounded (after 144 words, the effect stops) for regular reviewers. • For top reviewers, the effect is nonexistent. Similarly to product nature, an important research question for future work is how to identify and exploit review categories for effective helpfulness prediction. We expect more sophisticated textual features to be necessary to differentiate between meaningful styles of reviews. 4.6 Review Context Sipos et al. (2014) found evidence that helpfulness votes are the consequence of judgments of relative quality (i.e., how the review compares to its neighbors) and that aggregate user voting polarity is influenced by the specific review ranking that websites display at any given point in time. To prove this, they collected daily snapshots of the top 50 reviews of 595 Amazon products over a 5 month period. Four months after the data collection period ended, they collected the full review rankings for all 595 products. This final review ranking was taken to be the “true” ranking. They studied daily changes and observed that: • A review receives more positive votes when it is under-ranked (under its final ranking). • A review receives more positive votes when it is superior to its neighbors. 
• A review receives fewer positive votes when it is over-ranked (over its final ranking). • A review receives fewer positive votes when it is locally inferior to its neighbors. Sipos et al. noted that these observations are consistent with the interpretation that users vote to correct “misorderings” in the ranking. This has important consequences for user-specific helpfulness prediction systems. Recall that votes may express judgments over a set of reviews. If researchers build training sets that identify user votes and contain sufficient information to replicate context at the time of voting, systems could learn more about user preferences: a vote would no longer inform solely on a user’s perceived helpfulness of a review x, but on the user’s perceived helpfulness of x with respect to its neighbors. This could be particularly useful in sparsity scenarios, and could lead to better helpfulness predictions. 5 Conclusions and Recommendations Online product review helpfulness modeling and prediction is a multi-faceted task that involves using content and context information to understand and predict helpfulness scores. Researchers now have at their disposal at least three public, pre-collected product review datasets — MDSD, ARD, and Ciao — to build and test systems. Although significant advances have been made on finding hand-crafted features for helpfulness prediction, effective comparisons between proposed approaches have been hindered by the lack of standard evaluation datasets, well-defined baselines, and feature ablation studies. However, there have been exciting developments in helpfulness prediction: systems that have attempted to exploit user and reviewer information, along with those based on sophisticated models (e.g., probabilistic matrix factorization, HMM-LDA) and neural network architectures, are promising prospects for future work. Furthermore, a variety of insightful observations have been made on moderating factors. In particular, product opinions, user idiosyncrasy, product and review nature, along with review voting context have been shown to influence the way users vote. This provides suggestive evidence that researchers should adopt a holistic view of the helpfulness voting process, which may require information not present in current datasets. 706 We conclude our survey with several recommendations for future work on computational modeling and prediction of review helpfulness. Task If one acknowledges the role that users play in determining whether a review is helpful or not, it seems contradictory to insist on predicting helpfulness scores, which represent the average perception of a subset of users that (1) may not be representative of the entire population and (2) may not serve users well if their perceptions do not align with the subset of users that voted (even if the subset consisted of the entire population). This is why we consider that user-specific helpfulness prediction, first presented in Moghaddam et al. (2012) and Tang et al. (2013), should be the goal of future work, as it allows systems to tailor their predictions to users’ preferences and needs (much like a recommender system). Note that pursuing user-specific helpfulness prediction is not enough. A substantial amount of work must still be done to find, approximate, and implement moderating factors in helpfulness prediction systems, as well as build models that can adequately reflect the effects of these factors. 
Data Given that we recommend user-specific helpfulness prediction, we propose the development of a gold standard that contains information that can facilitate the design of user-specific models (e.g., records of who voted and how, data relevant to user-profiling recommendations such as age, location, social networks, purchase and browsing history and patterns, product reviews written, and review and product rating histories). Furthermore, as users frequently vote on reviews in a different context (scores and neighboring reviews can vary over time), this dataset should include temporal information, which would allow researchers to reconstruct the context under which votes are cast. To build this dataset, we recommend that researchers work with companies such as Amazon, which may have such information. Features and knowledge sources While we encourage the development of user-specific helpfulness prediction, we by no means imply that a model should be trained for each user. In fact, this may not be feasible if a user has cast only a small number of votes. There are multiple ways to approach this task. One is to train a user-specific model for each cluster of “similar” users. Taking inspirations from collaborative filtering, we could define or learn user similarity based on their purchasing/browsing/review and product rating histories (Liu et al., 2014) as well as profiling information (Krulwich, 1997), which should be available in the aforementioned dataset. Further, “similar” reviews (i.e., reviews on which users vote similarly) could be exploited (Sarwar et al., 2001; Linden et al., 2003). Once product and user/reviewer factors are incorporated into a model, it may become feasible to use past instances to predict helpfulness votes (how similar is a test instance to past situations where a user has voted “helpful”?). Baseline systems To design a strong baseline system, first, researchers should consider all proposed features so far, including content features, context features, and features used to approach moderating factors. Second, combinations of these features should be systematically tested on the different models proposed by researchers. As we have seen that product nature influences the voting process, these tests should be conducted over different products and product categories. We recommend identifying specific experience and search products, since the effects of product nature have already been proven for them. Although ideally, these tests would be carried out on our proposed gold-standard dataset, we believe that the Ciao dataset introduced in Tang et al. (2013) and ARD (McAuley et al., 2015) can prove useful to define a baseline in the short term. Towards this purpose, the systems proposed in Tang et al. (2013), Mukherjee et al. (2017), Malik and Hussain (2017), and Chen et al. (2018) could serve as baselines after being enriched with extra features. Other platforms, review domains and languages While we focused on Amazon product reviews written in English, the majority of the features discussed in Section 3 are platform-, domainand language-independent, and the existence and importance of moderating factors described in Section 4 is by no means limited to product reviews. Consequently, we encourage researchers to evaluate the usefulness of these features and study these moderating factors in different domains, platforms, and languages, possibly identifying new features and moderating factors. 
Acknowledgments We thank the three anonymous reviewers for their detailed and insightful comments on an earlier draft of the paper. This work was supported in part by USAF Grant FA9550-15-1-0346. 707 References Einar Bjering, Lars Jaakko Havro, and Oystein Moen. 2015. An empirical investigation of self-selection bias and factors influencing review helpfulness. International Journal of Business and Management, 10(7):16–30. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 440–447. Qing Cao, Wenjing Duan, and Qiwei Gan. 2011. Exploring determinants of voting for the helpfulness of online user reviews: A text mining approach. Decision Support Systems, 50(2):511–521. Cen Chen, Yinfei Yang, Jun Zhou, Xiaolong Li, and Forrest Sheng Bao. 2018. Cross-domain review helpfulness prediction based on convolutional neural networks with auxiliary domain discriminators. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 602–607. Cristian Danescu-Niculescu-Mizil, Gueorgi Kossinets, Jon Kleinberg, and Lillian Lee. 2009. How opinions are received by online communities: A case study on Amazon.com helpfulness votes. In Proceedings of the 18th International Conference on World Wide Web, pages 141–150. Anindya Ghose and Panagiotis G. Ipeirotis. 2011. Estimating the helpfulness and economic impact of product reviews: Mining text and reviewer characteristics. IEEE Transactions on Knowledge and Data Engineering, 23(10):1498–1512. Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of the 25th International Conference on World Wide Web, pages 507–517. Yu Hong, Jun Lu, Jianmin Yao, Qiaoming Zhu, and Guodong Zhou. 2012. What reviews are satisfactory: Novel features for automatic helpfulness voting. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 495–504. Albert H. Huang, Kuanchin Chen, David C. Yen, and Trang P. Tran. 2015. A study of factors that contribute to online review helpfulness. Computers in Human Behavior, 48:17–27. Soo-Min Kim, Patrick Pantel, Tim Chklovski, and Marco Pennacchiotti. 2006. Automatically assessing review helpfulness. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 423–430. Nikolaos Korfiatis, Elena Garc´ıa-Bariocanal, and Salvador S´anchez-Alonso. 2012. Evaluating content quality and helpfulness of online product reviews: The interplay of review helpfulness vs. review content. Electronic Commerce Research and Applications, 11(3):205–217. Srikumar Krishnamoorthy. 2015. Linguistic features for review helpfulness prediction. Expert Systems with Applications, 42(7):3751–3759. Bruce Krulwich. 1997. Lifestyle finder: Intelligent user profiling using large-scale demographic data. AI Magazine, 18(2):37–45. Sangjae Lee and Joon Yeon Choeh. 2014. Predicting the helpfulness of online reviews using multilayer perceptron neural networks. Expert Systems with Applications, 41(6):3041–3046. Greg Linden, Brent Smith, and Jeremy York. 2003. Amazon.com recommendations: item-to-item collaborative filtering. IEEE Internet Computing, 7(1):76–80. 
Haifeng Liu, Zheng Hu, Ahmad Mian, Hui Tian, and Xuzhen Zhu. 2014. A new user similarity model to improve the accuracy of collaborative filtering. Knowledge-Based Systems, 56:156–166. Haijing Liu, Yang Gao, Pin Lv, Mengxue Li, Shiqiang Geng, Minglan Li, and Hao Wang. 2017. Using argument-based features to predict and analyse review helpfulness. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1358–1363. Jingjing Liu, Yunbo Cao, Chin-Yew Lin, Yalou Huang, and Ming Zhou. 2007. Low-quality product review detection in opinion summarization. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 334–342. Yue Lu, Panayiotis Tsaparas, Alexandros Ntoulas, and Livia Polanyi. 2010. Exploiting social context for review quality prediction. In Proceedings of the 19th International Conference on World Wide Web, pages 691–700. M.S.I. Malik and Ayyaz Hussain. 2017. Helpfulness of product reviews as a function of discrete positive and negative emotions. Computers in Human Behavior, 73:290–302. Lionel Martin and Pearl Pu. 2014. Prediction of helpful reviews using emotions extraction. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 1551–1557. Julian McAuley and Jure Leskovec. 2013. Hidden factors and hidden topics: Understanding rating dimensions with review text. In Proceedings of the 7th ACM Conference on Recommender Systems, pages 165–172. 708 Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. 2015. Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 43–52. Samaneh Moghaddam, Mohsen Jamali, and Martin Ester. 2012. ETF: Extended tensor factorization model for personalizing prediction of review helpfulness. In Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, pages 163– 172. Susan M. Mudambi and David Schuff. 2010. What makes a helpful online review? A study of customer reviews on Amazon.com. MIS Quarterly, 34(1):185–200. Subhabrata Mukherjee, Kashyap Popat, and Gerhard Weikum. 2017. Exploring latent semantic factors to find useful product reviews. In Proceedings of the 2017 SIAM International Conference on Data Mining, pages 480–488. Phillip Nelson. 1970. Information and consumer behavior. Journal of Political Economy, 78(2):311– 329. Phillip Nelson. 1974. Advertising as information. Journal of Political Economy, 82(4):729–754. Michael P. O’Mahony, P´adraig Cunningham, and Barry Smyth. 2010. An assessment of machine learning techniques for review recommendation. In Artificial Intelligence and Cognitive Science, pages 241–250. Michael P. O’Mahony and Barry Smyth. 2009. Learning to recommend helpful hotel reviews. In Proceedings of the Third ACM Conference on Recommender Systems, pages 305–308. Michael P. O’Mahony and Barry Smyth. 2010. A classification-based review recommender. Knowledge-Based Systems, 23(4):323–329. Jahna Otterbacher. 2009. Helpfulness in online communities: A measure of message quality. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 955–964. Yue Pan and Jason Q. Zhang. 2011. Born unequal: A study of the helpfulness of user-generated product reviews. Journal of Retailing, 87(4):598–612. Aika Qazi, Karim Bux Shah Syed, Ram Gopal Raj, Erik Cambria, Muhammad Tahir, and Daniyal Alghazzawi. 2016. 
A concept-level approach to the analysis of online review helpfulness. Computers in Human Behavior, 58:75–81. Mohammad Salehan and Dan J. Kim. 2016. Predicting the performance of online consumer reviews: A sentiment mining approach to big data analytics. Decision Support Systems, 81:30–40. Badrul Sarwar, George Karypis, Joseph Konstan, and John Riedl. 2001. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International Conference on World Wide Web, pages 285–295. Ruben Sipos, Arpita Ghosh, and Thorsten Joachims. 2014. Was this review helpful to you?: It depends! Context and voting patterns in online content. In Proceedings of the 23rd International Conference on World Wide Web, pages 337–348. Jiliang Tang, Huiji Gao, Xia Hu, and Huan Liu. 2013. Context-aware review helpfulness rating prediction. In Proceedings of the 7th ACM Conference on Recommender Systems, pages 1–8. Jiliang Tang, Huiji Gao, and Huan Liu. 2012. mTrust: Discerning multi-faceted trust in a connected world. In Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, pages 93– 102. Oren Tsur and Ari Rappoport. 2009. Revrank: A fully unsupervised algorithm for selecting the most helpful book reviews. In International AAAI Conference on Web and Social Media, pages 154–161. William R. Wilson and Robert A. Peterson. 1989. Some limits on the potency of word-of-mouth information. Advances in Consumer Research, 16:23– 29. Yinfei Yang, Cen Chen, and Forrest Sheng Bao. 2016. Aspect-based helpfulness prediction for online product reviews. In Proceedings of the 28th IEEE International Conference on Tools with Artificial Intelligence, pages 836–843. Yinfei Yang, Yaowei Yan, Minghui Qiu, and Forrest Bao. 2015. Semantic analysis and helpfulness prediction of text for online product reviews. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 38–44. Yi-Ching Zeng, Tsun Ku, Shih-Hung Wu, Liang-Pu Chen, and Gwo-Dong Chen. 2014. Modeling the helpful opinion mining of online consumer reviews as a classification problem. International Journal of Computational Linguistics & Chinese Language Processing, 19(2):17–32. Zhu Zhang and Balaji Varadarajan. 2006. Utility scoring of product reviews. In Proceedings of the 15th ACM International Conference on Information and Knowledge Management, pages 51–57.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 709–719 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 709 Mining Cross-Cultural Differences and Similarities in Social Media Bill Yuchen Lin1∗ Frank F. Xu1∗ Kenny Q. Zhu1 Seung-won Hwang2 1Shanghai Jiao Tong University, Shanghai, China {yuchenlin, frankxu}@sjtu.edu.cn, [email protected] 2Yonsei University, Seoul, Republic of Korea [email protected] Abstract Cross-cultural differences and similarities are common in cross-lingual natural language understanding, especially for research in social media. For instance, people of distinct cultures often hold different opinions on a single named entity. Also, understanding slang terms across languages requires knowledge of cross-cultural similarities. In this paper, we study the problem of computing such cross-cultural differences and similarities. We present a lightweight yet effective approach, and evaluate it on two novel tasks: 1) mining cross-cultural differences of named entities and 2) finding similar terms for slang across languages. Experimental results show that our framework substantially outperforms a number of baseline methods on both tasks. The framework could be useful for machine translation applications and research in computational social science. 1 Introduction Computing similarities between terms is one of the most fundamental computational tasks in natural language understanding. Much work has been done in this area, most notably using the distributional properties drawn from large monolingual textual corpora to train vector representations of words or other linguistic units (Pennington et al., 2014; Le and Mikolov, 2014). However, computing cross-cultural similarities of terms between different cultures is still an open research question, which is important in cross-lingual natural language understanding. In this paper, we address cross-cultural research questions such as these: ∗Both authors contributed equally. #Nanjing says no to Nagoya# This small Japan, is really irritating. What is this? We Chinese people are tolerant of good and evil, and you? People do things, and the gods are watching. Japanese, be careful, and beware of thunder chop! (via Bing Translation) Figure 1: Two social media messages about Nagoya from different cultures in 2012 1. Were there any cross-cultural differences between Nagoya (a city in Japan) for native English speakers and 名古屋(Nagoya in Chinese) for Chinese people in 2012? 2. What English terms can be used to explain “浮云” (a Chinese slang term)? These kinds of questions about cross-cultural differences and similarities are important in crosscultural social studies, multi-lingual sentiment analysis, culturally sensitive machine translation, and many other NLP tasks, especially in social media. We propose two novel tasks in mining them from social media. The first task (Section 4) is to mine crosscultural differences in the perception of named entities (e.g., persons, places and organizations). Back in 2012, in the case of “Nagoya”, many native English speakers posted their pleasant travel experiences in Nagoya on Twitter. However, Chinese people overwhelmingly greeted the city with anger and condemnation on Weibo (a Chinese version of Twitter), because the city mayor denied the truthfulness of the Nanjing Massacre. Figure 1 illustrates two example microblog messages about Nagoya in Twitter and Weibo respectively. 
The second task (Section 5) is to find similar terms for slang across cultures and languages. Social media is always a rich soil where slang terms emerge in many cultures. For example, 710 “浮云” literally means “floating clouds”, but now almost equals to “nothingness” on the Chinese web. Our experiments show that well-known online machine translators such as Google Translate are only able to translate such slang terms to their literal meanings, even under clear contexts where slang meanings are much more appropriate. Enabling intelligent agents to understand such cross-cultural knowledge can benefit their performances in various cross-lingual language processing tasks. Both tasks share the same core problem, which is how to compute cross-cultural differences (or similarities) between two terms from different cultures. A term here can be either an ordinary word, an entity name, or a slang term. We focus on names and slang in this paper for they convey more social and cultural connotations. There are many works on cross-lingual word representation (Ruder et al., 2017) to compute general cross-lingual similarities (CamachoCollados et al., 2017). Most existing models require bilingual supervision such as aligned parallel corpora, bilingual lexicons, or comparable documents (Sarath et al., 2014; Koˇcisk´y et al., 2014; Upadhyay et al., 2016). However, they do not purposely preserve social or cultural characteristics of named entities or slang terms, and the required parallel corpora are rare and expensive. In this paper, we propose a lightweight yet effective approach to project two incompatible monolingual word vector spaces into a single bilingual word vector space, known as social vector space (SocVec). A key element of SocVec is the idea of “bilingual social lexicon”, which contains bilingual mappings of selected words reflecting psychological processes, which we believe are central to capturing the socio-linguistic characteristics. Our contribution in this paper is two-fold: (a) We present an effective approach (SocVec) to mine cross-cultural similarities and differences of terms, which could benefit research in machine translation, cross-cultural social media analysis, and other cross-lingual research in natural language processing and computational social science. (b) We propose two novel and important tasks in cross-cultural social studies and social media analysis. Experimental results on our annotated datasets show that the proposed method outperforms many strong baseline methods. 2 The SocVec Framework In this section, we first discuss the intuition behind our model, the concept of “social words” and our notations. Then, we present the overall workflow of our approach. We finally describe the SocVec framework in detail. 2.1 Problem Statement We choose (English, Chinese) to be the target language pair throughout this paper for the salient cross-cultural differences between the east and the west1. Given an English term W and a Chinese term U, the core research question is how to compute a similarity score, ccsim(W, U), to represent the cross-cultural similarities between them. We cannot directly calculate the similarity between the monolingual word vectors of W and U, because they are trained separately and the semantics of dimension are not aligned. Thus, the challenge is to devise a way to compute similarities across two different vector spaces while retaining their respective cultural characteristics. 
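As a small illustration of why two separately trained spaces cannot be compared directly, note that each monolingual embedding space is only defined up to a rotation: rotating one space preserves every within-language similarity but changes any naive cross-space cosine arbitrarily. The sketch below is ours, with random vectors standing in for trained embeddings, and only demonstrates this indeterminacy.

```python
# Illustration (with random stand-in vectors, not trained embeddings) that a
# cosine computed across two independently trained spaces carries no signal:
# an orthogonal rotation Q leaves the Chinese space internally unchanged but
# alters the cross-space cosine at will.
import numpy as np

rng = np.random.default_rng(0)
e_nagoya = rng.normal(size=100)      # "Nagoya" in the English space (EnVec)
c_nagoya = rng.normal(size=100)      # "名古屋" in the Chinese space (CnVec)
Q, _ = np.linalg.qr(rng.normal(size=(100, 100)))  # random rotation of CnVec

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Both versions of CnVec are equally valid training solutions,
# yet the naive cross-space cosine differs between them.
print(cos(e_nagoya, c_nagoya), cos(e_nagoya, Q @ c_nagoya))
```

This is precisely the indeterminacy that the SocVec projection in Section 2.5 sidesteps, by expressing every word through its similarities to a shared set of social-word pivots rather than through raw coordinates.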
A very intuitive solution is to firstly translate the Chinese term U to its English counterpart U ′ through a Chinese-English bilingual lexicon, and then regard ccsim(W, U) as the (cosine) similarity between W and U ′ with their monolingual word embeddings. However, this solution is not promising in some common cases for three reasons: (a) if U is an OOV (Out of Vocabulary) term, e.g., a novel slang term, then there is probably no translation U ′ in bilingual lexicons. (b) if W and U are names referring to the same named entity, then we have U ′ = W. Therefore, ccsim(W, U) is just the similarity between W and itself, and we cannot capture any cross-cultural differences with this method. (c) this approach does not explicitly preserve the cultural and social contexts of the terms. To overcome the above problems, our intuition is to project both English and Chinese word vectors into a single third space, known as SocVec, and the projection is supposed to purposely carry cultural features of terms. 2.2 Social Words and Our Notations Some research in psychology and sociology (Kitayama et al., 2000; Gareis and Wilkins, 2011) 1Nevertheless, the techniques are language independent and thus can be utilized for any language pairs so long as the necessary resources outlined in Section 2.3 are available. 711 ESV !!"#$(&, ()   ( … ESV tmrw adore & … *+ ,… … … ● … ● Bilingual Social Lexicon (BSL) .-: .+: Figure 2: Workflow for computing the crosscultural similarity between an English word W and a Chinese word U, denoted by ccsim(W, U) show that culture can be highly related to emotions and opinions people express in their discussions. As suggested by Tausczik and Pennebaker (2009), we thus define the concept of “social word” as the words directly reflecting opinion, sentiment, cognition and other human psychological processes2, which are important to capturing cultural and social characteristics. Both Elahi and Monachesi (2012) and Garimella et al. (2016a) find such social words are most effective culture/socio-linguistic features in identifying cross-cultural differences. We use these notations throughout the paper: CnVec and EnVec denote the Chinese and English word vector space, respectively; CSV and ESV denote the Chinese and English social word vocab; BL means Bilingual Lexicon, and BSL is short for Bilingual Social Lexicon; finally, we use Ex, Cx and Sx to denote the word vectors of the word x in EnVec, CnVec and SocVec spaces respectively. 2.3 Overall Workflow Figure 2 shows the workflow of our framework to construct the SocVec and compute ccsim(W, U). Our proposed SocVec model attacks the problem with the help of three low-cost external resources: (i) an English corpus and a Chinese corpus from social media; (ii) an English-to-Chinese bilingual lexicon (BL); (iii) an English social word vocabulary (ESV) and a Chinese one (CSV). We train English and Chinese word embeddings (EnVec and CnVec) on the English and Chinese social media corpus respectively. Then, we build a BSL from the CSV, ESV and BL (see Section 2.4). The BSL further maps the previously incompati2Example social words in English include fawn, inept, tremendous, gratitude, terror, terrific, loving, traumatic, etc. We discuss the sources of such social words in Section 3. ble EnVec and CnVec into a single common vector space SocVec, where two new vectors, SW for W and SU for U, are finally comparable. 2.4 Building the BSL The process of building the BSL is illustrated in Figure 3. 
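Before the construction is spelled out, the following sketch (our own, not the authors' released code) previews how a BSL entry and the SocVec projection of Sections 2.4–2.5 could be computed. The data structures are assumptions for illustration: embeddings are plain dictionaries of NumPy vectors, the bilingual lexicon maps an English word to (Chinese word, confidence) pairs, and the "Top" pseudo-word generator used here is just one of the four options defined next.

```python
# Illustrative sketch of building one BSL entry and projecting words into the
# SocVec space. en_vec / cn_vec: dicts mapping word -> numpy vector;
# bl: dict mapping an English word -> list of (Chinese word, confidence).
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pseudo_word(en_social_word, bl, csv_vocab, cn_vec):
    """Translate an English social word, keep only translations that are in the
    Chinese social vocabulary (CSV), and return the 'Top' (most confident) vector."""
    candidates = [(zh, w) for zh, w in bl.get(en_social_word, [])
                  if zh in csv_vocab and zh in cn_vec]
    if not candidates:
        return None
    top_zh, _ = max(candidates, key=lambda x: x[1])
    return cn_vec[top_zh]

def build_bsl(esv, bl, csv_vocab, en_vec, cn_vec):
    """Bilingual social lexicon: (English social word vector, pseudo-word vector) pairs."""
    bsl = []
    for w in esv:
        if w not in en_vec:
            continue
        pv = pseudo_word(w, bl, csv_vocab, cn_vec)
        if pv is not None:
            bsl.append((en_vec[w], pv))
    return bsl

def socvec_en(word, en_vec, bsl):
    """Project an English word into SocVec: similarity to every English pivot."""
    return np.array([cos(en_vec[word], e) for e, _ in bsl])

def socvec_cn(word, cn_vec, bsl):
    """Project a Chinese word into SocVec: similarity to every pseudo-word pivot."""
    return np.array([cos(cn_vec[word], p) for _, p in bsl])

def ccsim(en_word, cn_word, en_vec, cn_vec, bsl):
    """Cross-cultural similarity: cosine between the two SocVec projections."""
    return cos(socvec_en(en_word, en_vec, bsl), socvec_cn(cn_word, cn_vec, bsl))
```

Because everything is computed from pretrained monolingual embeddings and a few thousand pivot similarities, no bilingual retraining over the corpora is required, which is the lightweight property the paper emphasizes.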
We first extract our bilingual lexicon (BL), where the confidence score $w_i$ represents the probability distribution over the multiple translations of each word. Afterwards, we use the BL to translate each social word in the ESV into a set of Chinese words and then filter out all the words that are not in the CSV. Now we have a set of Chinese social words for each English social word, which is referred to as its "translation set". The final step is to generate a Chinese "pseudo-word" for each English social word from its translation set. A "pseudo-word" can be either a real word that is the most representative word in the translation set, or an imaginary word whose vector is some combination of the vectors of the words in the translation set. For example, in Figure 3, the English social word "fawn" has three Chinese translations in the bilingual lexicon, but only two of them (underlined) are in the CSV. Thus, we keep only these two in the translation set of the filtered bilingual lexicon. The pseudo-word generator takes the word vectors of these two words (in the black box), namely 奉承 (flatter) and 谄媚 (toady), as input, and generates the pseudo-word vector denoted by "fawn*". Note that the BSL can also be built in the other direction, from Chinese to English, in the same manner; however, we find that the current direction gives better results due to the better translation quality of our BL in this direction.

[Figure 3: Generating an entry in the BSL for "fawn" and its pseudo-word "fawn*"]

Given an English social word, we denote by $t_i$ the $i$-th Chinese word of its translation set, which consists of $N$ social words. We design four intuitive types of pseudo-word generator, all of which are tested in the experiments:

(1) Max. The maximum of the values in each dimension, assuming dimensionality $K$:
$\mathrm{Pseudo}(C_{t_1}, \dots, C_{t_N}) = \big[\max(C_{t_1}^{(1)}, \dots, C_{t_N}^{(1)}),\; \dots,\; \max(C_{t_1}^{(K)}, \dots, C_{t_N}^{(K)})\big]^{\top}$

(2) Avg. The average of the values in every dimension:
$\mathrm{Pseudo}(C_{t_1}, \dots, C_{t_N}) = \frac{1}{N} \sum_{i=1}^{N} C_{t_i}$

(3) WAvg. The weighted average of every dimension with respect to the translation confidence:
$\mathrm{Pseudo}(C_{t_1}, \dots, C_{t_N}) = \frac{1}{N} \sum_{i=1}^{N} w_i C_{t_i}$

(4) Top. The most confident translation:
$\mathrm{Pseudo}(C_{t_1}, \dots, C_{t_N}) = C_{t_k}, \quad k = \operatorname{argmax}_i w_i$

Finally, the BSL contains a set of English-Chinese word vector pairs, where each entry pairs an English social word with its Chinese pseudo-word derived from its translation set.

2.5 Constructing the SocVec Space

Let $B_i$ denote the English word of the $i$-th entry of the BSL, and let $B^*_i$ denote its corresponding Chinese pseudo-word. We project the English word vector $E_W$ into the SocVec space by computing the cosine similarities between $E_W$ and each English word vector in the BSL as the values of the SocVec dimensions, effectively constructing a new vector $S_W$ of size $L$. Similarly, we map a Chinese word vector $C_U$ to a new vector $S_U$. $S_W$ and $S_U$ belong to the same vector space SocVec and are therefore comparable. The following equation illustrates the projection and how ccsim is computed, where sim is a generic similarity function for which several metrics are considered in the experiments:

$\mathrm{ccsim}(W, U) := f(E_W, C_U) = \mathrm{sim}\big([\cos(E_W, E_{B_1}), \dots, \cos(E_W, E_{B_L})]^{\top},\; [\cos(C_U, C_{B^*_1}), \dots, \cos(C_U, C_{B^*_L})]^{\top}\big) = \mathrm{sim}(S_W, S_U)$

For example, if W is "Nagoya" and U is "名古屋", we compute the cosine similarities between "Nagoya" and each English social word in the BSL with their monolingual word embeddings in English. These similarities compose $S_{\mathrm{Nagoya}}$. Similarly, we compute the cosine similarities between "名古屋" and each Chinese pseudo-word, which compose the social word vector $S_{名古屋}$. In other words, for each culture/language, the new word vectors such as $S_W$ are constructed from the monolingual similarities of each word to the vectors of a set of task-related words ("social words" in our case). This is also a significant part of the novelty of our transformation method.

3 Experimental Setup

Prior to evaluating SocVec on our two proposed tasks in Section 4 and Section 5, we describe our preparation steps as follows.

Social Media Corpora Our English Twitter corpus is obtained from Archive Team's Twitter stream grab (https://archive.org/details/twitterstream). The Chinese Weibo corpus comes from Open Weiboscope Data Access (http://weiboscope.jmsc.hku.hk/datazip/) (Fu et al., 2013). Both corpora cover the whole year of 2012. We then randomly down-sample each corpus to 100 million messages in which each message contains at least 10 characters, normalize the text (Han et al., 2012), lemmatize the text (Manning et al., 2014), and use LTP (Che et al., 2010) to perform word segmentation on the Chinese corpus.

Entity Linking and Word Embedding Entity linking is a preprocessing step that links entity mentions (surface forms) to the entities they refer to. For the Twitter corpus, we use Wikifier (Ratinov et al., 2011; Cheng and Roth, 2013), a widely used entity linker for English. Because no sophisticated tool for Chinese short text is available, we implement our own tool that is greedy for high precision. We train English and Chinese monolingual word embeddings using word2vec's skip-gram method with a window size of 5 (Mikolov et al., 2013b).

Bilingual Lexicon Our bilingual lexicon is collected from Microsoft Translator (http://www.bing.com/translator/api/Dictionary/Lookup?from=en&to=zh-CHS&text=<input_word>), which translates English words to multiple Chinese words
Specifically, the input in this task is a list of 700 named entities of interest and two monolingual social media corpora; the output is the scores for the 700 entities indicating the crosscultural differences of the concerns towards them between two corpora. The ground truth is from the labels collected from human annotators. 4.1 Ground Truth Scores Harris (1954) states that the meaning of words is evidenced by the contexts they occur with. Likewise, we assume that the cultural properties of an entity can be captured by the terms they always co-occur within a large social media corpus. Thus, for each of randomly selected 700 named entities, we present human annotators with two lists of 20 most co-occurred terms within Twitter and Weibo corpus respectively. Our annotators are instructed to rate the topicrelatedness between the two word lists using one of following labels: “very different”, “different”, “hard to say”, “similar” and “very similar”. We do this for efficiency and avoiding subjectivity. As the word lists presented come from social media messages, the social and cultural elements are already embedded in their chances of occurrence. All four annotators are native Chinese speakers but have excellent command of English and lived in the US extensively, and they are trained with many selected examples to form shared understanding of the labels. The inter-annotator agreement is 0.67 by Cohen’s kappa coefficient, suggesting substantial correlation (Landis and Koch, 1977). 4.2 Baseline and Our Methods We propose eight baseline methods for this novel task: distribution-based methods (BL-JS, E-BLJS, and WN-WUP) compute cross-lingual relatedness between two lists of the words surrounding the input English and Chinese terms respectively (LE and LC); transformation-based methods (LTrans and BLex) compute the vector representation in English and Chinese corpus respectively, and then train a transformation; MCCA, MCluster and Duong are three typical bilingual word representation models for computing general cross-lingual word similarities. The LE and LC in the BL-JS and WN-WUP methods are the same as the lists that annotators judge. BL-JS (Bilingual Lexicon Jaccard Similarity) uses the bilingual lexicon to translate LE to a Chinese word list L∗ E as a medium, and then calculates the Jaccard Similarity between L∗ E and LC as JEC. Similarly, we compute JCE. Finally, we regard (JEC + JCE)/2 as the score of this named entity. E-BL-JS (Embedding-based Jaccard Similarity) differs from BL-JS in that it instead compares the two lists of words gathered from the rankings of word embedding similarities between the name of entities and all English words and Chinese words respectively. WN-WUP (WordNet Wu-Palmer Similarity) uses Open Multilingual Wordnet (Wang and Bond, 2013) to compute the average similarities over all English-Chinese word pairs constructed from the LE and LC. We follow the steps of Mikolov et al. (2013a) to train a linear transformation (LTrans) matrix between EnVec and CnVec, using 3,000 translation pairs with maximum confidences in the bilingual lexicon. Given a named entity, this solution simply calculates the cosine similarity between the vector of its English name and the transformed vector of its Chinese name. BLex (Bilingual Lexicon Space) is similar to our SocVec but it does not use any social word vocabularies but uses bilingual lexicon entries as pivots instead. 
MCCA (Ammar et al., 2016) takes two trained monolingual word embeddings with a bilingual lexicon as input, and develop a bilingual word em714 Entity Twitter topics Weibo topics Maldives coup, president Nasheed quit, political crisis holiday, travel, honeymoon, paradise, beach Nagoya tour, concert, travel, attractive, Osaka Mayor Takashi Kawamura, Nanjing Massacre, denial of history Quebec Conservative Party, Liberal Party, politicians, prime minister, power failure travel, autumn, maples, study abroad, immigration, independence Philippines gunman attack, police, quake, tsunami South China Sea, sovereignty dispute, confrontation, protest Yao Ming NBA, Chinese, good player, Asian patriotism, collective values, Jeremy Lin, Liu Xiang, Chinese Law maker, gold medal superstar USC college football, baseball, Stanford, Alabama, win, lose top destination for overseas education, Chinese student murdered, scholars, economics, Sino American politics Table 1: Selected culturally different entities with summarized Twitter and Weibo’s trending topics bedding space. It is extended from the work of Faruqui and Dyer (2014), which performs slightly worse in the experiments. MCluster (Ammar et al., 2016) requires re-training the bilingual word embeddings from the two mono-lingual corpora with a bilingual lexicon. Similarly, Duong (Duong et al., 2016) retrains the embeddings from monolingual corpora with an EM-like training algorithm. We also use our BSL as the bilingual lexicon in these methods to investigate its effectiveness and generalizability. The dimensionality is tuned from {50, 100, 150, 200} in all these bilingual word embedding methods. With our constructed SocVec space, given a named entity with its English and Chinese names, we can simply compute the similarity between their SocVecs as its cross-cultural difference score. Our method is based on monolingual word embeddings and a BSL, and thus does not need the timeconsuming re-training on the corpora. 4.3 Experimental Results For qualitative evaluation, Table 1 shows some of the most culturally different entities mined by the SocVec method. The hot and trendy topics on Twitter and Weibo are manually summarized to help explain the cross-cultural differences. The perception of these entities diverges widely between English and Chinese social media, thus suggesting significant cross-cultural differences. Note that some cultural differences are time-specific. We believe such temporal variations of cultural differences can be valuable and beneficial for social studies as well. Investigating temporal factors of cross-cultural differences in social media can be an interesting future research topic in this task. In Table 2, we evaluate the benchmark methods and our approach with three metrics: Spearman and Pearson, where correlation is computed beMethod Spearman Pearson MAP BL-JS 0.276 0.265 0.644 WN-WUP 0.335 0.349 0.677 E-BL-JS 0.221 0.210 0.571 LTrans 0.366 0.385 0.644 BLex 0.596 0.595 0.765 MCCA-BL(100d) 0.325 0.343 0.651 MCCA-BSL(150d) 0.357 0.376 0.671 MCluster-BL(100d) 0.365 0.388 0.693 MCluster-BSL(100d) 0.391 0.425 0.713 Duong-BL(100d) 0.618 0.627 0.785 Duong-BSL(100d) 0.625 0.631 0.791 SocVec:opn 0.668 0.662 0.834 SocVec:all 0.676 0.671 0.834 SocVec:noun 0.564 0.562 0.756 SocVec:verb 0.615 0.618 0.779 SocVec:adj. 
0.636 0.639 0.800 Table 2: Comparison of Different Methods tween truth averaged scores (quantifying the labels from 1.0 to 5.0) and computed cultural difference scores from different methods; Mean Average Precision (MAP), which converts averaged scores as binary labels, by setting 3.0 as the threshold. The SocVec:opn considers only OpinionFinder as the ESV, while SocVec:all uses the union of Empath and OpinionFinder vocabularies7. Lexicon Ablation Test. To show the effectiveness of social words versus other type of words as the bridge between the two cultures, we also compare the results using sets of nouns (SocVec:noun), verbs (SocVec:verb) and adjectives (SocVec:adj.). All vocabularies under comparison are of similar sizes (around 5,000), indicating that the improvement of our method is significant. Results show that our SocVec models, and in particular, the SocVec model using the social words as cross-lingual media, performs the best. 7The following tuned parameters are used in SocVec methods: 5-word context window, 150 dimensions monolingual word vectors, cosine similarity as the sim function, and “Top” as the pseudo-word generator. 715 Similarity Spearman Pearson MAP PCorr. 0.631 0.625 0.806 L1 + M 0.666 0.656 0.824 Cos 0.676 0.669 0.834 L2 + E 0.676 0.671 0.834 Table 3: Different Similarity Functions Generator Spearman Pearson MAP Max. 0.413 0.401 0.726 Avg. 0.667 0.625 0.831 W.Avg. 0.671 0.660 0.832 Top 0.676 0.671 0.834 Table 4: Different Pseudo-word Generators Similarity Options. We also evaluate the effectiveness of four different similarity options in SocVec, namely, Pearson Correlation Coefficient (PCorr.), L1-normalized Manhattan distance (L1+M), Cosine Similarity (Cos) and L2normalized Euclidean distance (L2+E). From Table 3, we conclude that among these four options, Cos and L2+E perform the best. Pseudo-word Generators. Table 4 shows effect of using four pseudo-word generator functions, from which we can infer that “Top” generator function performs best for it reduces some noisy translation pairs. 5 Task 2: Finding most similar words for slang across languages Task Description: This task is to find the most similar English words of a given Chinese slang term in terms of its slang meanings and sentiment, and vice versa. The input is a list of English/Chinese slang terms of interest and two monolingual social media corpora; the output is a list of Chinese/English word sets corresponding to each input slang term. Simply put, for each given slang term, we want to find a set of the words in a different language that are most similar to itself and thus can help people understand it across languages. We propose Average Cosine Similarity (Section 5.3) to evaluate a method’s performance with the ground truth (presented below). 5.1 Ground Truth Slang Terms. We collect the Chinese slang terms from an online Chinese slang glossary8 consisting of 200 popular slang terms with English explanations. For English, we resort to a slang word 8https://www.chinasmack.com/glossary Gg Bi Bd CC LT 18.24 16.38 17.11 17.38 9.14 TransBL MCCA MCluster Duong SV 18.13 17.29 17.47 20.92 23.01 (a) Chinese Slang to English Gg Bi Bd LT TransBL 6.40 15.96 15.44 7.32 11.43 MCCA MCluster Duong SV 15.29 14.97 15.13 17.31 (b) English Slang to Chinese Table 5: ACS Sum Results of Slang Translation list from OnlineSlangDictionary9 with explanations and downsample the list to 200 terms. Truth Sets. For each Chinese slang term, its truth set is a set of words extracted from its English explanation. 
For example, we construct the truth set of the Chinese slang term “二百五” by manually extracting significant words about its slang meanings (bold) in the glossary: 二 二 二百 百 百五 五 五: A foolish person who is lacking in sense but still stubborn, rude, and impetuous. Similarly, for each English slang term, its Chinese word sets are the translation of the words hand picked from its English explanation. 5.2 Baseline and Our Methods We propose two types of baseline methods for this task. The first is based on well-known online translators, namely Google (Gg), Bing (Bi) and Baidu (Bd). Note that experiments using them are done in August, 2017. Another baseline method for Chinese is CC-CEDICT10 (CC), an online public Chinese-English dictionary, which is constantly updated for popular slang terms. Considering situations where many slang terms have literal meanings, it may be unfair to retrieve target terms from such machine translators by solely inputing slang terms without specific contexts. Thus, we utilize example sentences of their slang meanings from some websites (mainly from Urban Dictionary11). The following example shows how we obtain the target translation terms for the slang word “fruitcake” (an insane person): Input sentence: Oh man, you don’t want to date that girl. She’s always drunk and yelling. She is a total fruitcake.12 9http://onlineslangdictionary.com/word-list/ 10https://cc-cedict.org/wiki/ 11http://www.urbandictionary.com/ 12http://www.englishbaby.com/lessons/4349/slang/ fruitcake 716 Slang Explanation Google Bing Baidu Ours 浮云 something as ephemeral and unimportant as “passing clouds” clouds nothing floating clouds nothingness, illusion 水军 “water army”, people paid to slander competitors on the Internet and to help shape public opinion Water army Navy Navy propaganda, complicit, fraudulent floozy a woman with a reputation for promiscuity N/A 劣根性 (depravity) 荡妇(slut) 骚货(slut),妖 精(promiscuous) fruitcake a crazy person, someone who is completely insane 水果蛋糕 (fruit cake) 水果蛋糕 (fruit cake) 水果蛋糕 (fruit cake) 怪诞(bizarre),厌 烦(annoying) Table 6: Bidirectional Slang Translation Examples Produced by SocVec Google Translation: 哦, 男人, 你不想约会那个女 孩。她总是喝醉了, 大喊大叫。她是一个水 水 水果 果 果蛋 蛋 蛋糕 糕 糕。 Another lines of baseline methods is scoringbased. The basic idea is to score all words in our bilingual lexicon and consider the top K words as the target terms. Given a source term to be translated, the Linear Transform (LT), MCCA, MCluster and Duong methods score the candidate target terms by computing cosine similarities in their constructed bilingual vector space (with the tuned best settings in previous evaluation). A more sophisticated baseline (TransBL) leverages the bilingual lexicon: for each candidate target term w in the target language, we first obtain its translations Tw back into the source language and then calculate the average word similarities between the source term and the translations Tw as w’s score. Our SocVec-based method (SV) is also scoringbased. It simply calculates the cosine similarities between the source term and each candidate target term within SocVec space as their scores. 5.3 Experimental Results To quantitatively evaluate our methods, we need to measure similarities between a produced word set and the ground truth set. Exact-matching Jaccard similarity is too strict to capture valuable relatedness between two word sets. We argue that average cosine similarity (ACS) between two sets of word vectors is a better metric for evaluating the similarity between two word sets. 
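As a rough sketch of this metric (the formal definition is given in the equation that follows), the snippet below computes the average pairwise cosine similarity between two word sets. Here emb is assumed to be a dictionary mapping words to pre-trained GloVe vectors, and out-of-vocabulary words are simply skipped — both are simplifying assumptions, not details from the paper.

```python
import numpy as np

def acs(set_a, set_b, emb):
    """Average pairwise cosine similarity (ACS) between two word sets.

    emb is assumed to be a dict mapping words to pre-trained GloVe
    vectors (a simplifying assumption); OOV words are skipped.
    """
    A = [emb[w] for w in set_a if w in emb]
    B = [emb[w] for w in set_b if w in emb]
    if not A or not B:
        return 0.0
    sims = [float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
            for a in A for b in B]
    return sum(sims) / len(sims)

# e.g., the truth set of "二百五" vs. a produced word list:
# acs({"foolish", "stubborn", "rude", "impetuous"},
#     {"imbecile", "brainless", "scumbag", "imposter"}, emb)
```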
ACS(A, B) = 1 |A||B| |A| X i=1 |B| X j=1 Ai · Bj ∥Ai∥∥Bj∥ The above equation illustrates such computation, where A and B are the two word sets: A is the truth set and B is a similar list produced by each method. In the previous case of “二百五” (Section 5.1), A is {foolish, stubborn, rude, impetuous} while B can be {imbecile, brainless, scumChinese Slang English Slang Explanation 萌 adorbz, adorb, adorbs, tweeny, attractiveee cute, adorable 二百五 shithead, stupidit, douchbag A foolish person 鸭梨 antsy, stressy, fidgety, grouchy, badmood stress, pressure, burden Table 7: Slang-to-Slang Translation Examples bag, imposter}. Ai and Bj denote the word vector of the ith word in A and jth word in B respectively. The embeddings used in ACS computations are pre-trained GloVe word vectors13 and thus the computation is fair among different methods. Experimental results of Chinese and English slang translation in terms of the sum of ACS over 200 terms are shown in Table 5. The performance of online translators for slang typically depends on human-set rules and supervised learning on well-annotated parallel corpora, which are rare and costly, especially for social media where slang emerges the most. This is probably the reason why they do not perform well. The Linear Transformation (LT) model is trained on highly confident translation pairs in the bilingual lexicon, which lacks OOV slang terms and social contexts around them. The TransBL method is competitive because its similarity computations are within monolingual semantic spaces and it makes great use of the bilingual lexicon, but it loses the information from the related words that are not in the bilingual lexicon. Our method (SV) outperforms baselines by directly using the distances in the SocVec space, which proves that the SocVec well captures the cross-cultural similarities between terms. To qualitatively evaluate our model, in Table 6, we present several examples of our translations for Chinese and English slang terms as well as their 13https://nlp.stanford.edu/projects/glove/ 717 explanations from the glossary. Our results are highly correlated with these explanations and capture their significant semantics, whereas most online translators just offer literal translations, even within obviously slang contexts. We take a step further to directly translate Chinese slang terms to English slang terms by filtering out ordinary (nonslang) words in the original target term lists, with examples shown in Table 7. 6 Related Work Although social media messages have been essential resources for research in computational social science, most works based on them only focus on a single culture and language (Petrovic et al., 2010; Paul and Dredze, 2011; Rosenthal and McKeown, 2015; Wang and Yang, 2015; Zhang et al., 2015; Lin et al., 2017). Cross-cultural studies have been conducted on the basis of a questionnaire-based approach for many years. There are only a few of such studies using NLP techniques. Nakasaki et al. (2009) present a framework to visualize the cross-cultural differences in concerns in multilingual blogs collected with a topic keyword. Elahi and Monachesi (2012) show that cross-cultural analysis through language in social media data is effective, especially using emotion terms as culture features, but the work is restricted in monolingual analysis and a single domain (love and relationship). Garimella et al. 
(2016a) investigate the cross-cultural differences in word usages between Australian and American English through their proposed “socio-linguistic features” (similar to our social words) in a supervised way. With the data of social network structures and user interactions, Garimella et al. (2016b) study how to quantify the controversy of topics within a culture and language. Guti´errez et al. (2016) propose an approach to detect differences of word usage in the cross-lingual topics of multilingual topic modeling results. To the best of our knowledge, our work for Task 1 is among the first to mine and quantify the cross-cultural differences in concerns about named entities across different languages. Existing research on slang mainly focuses on automatic discovering of slang terms (Elsahar and Elbeltagy, 2014) and normalization of noisy texts (Han et al., 2012) as well as slang formation. Ni and Wang (2017) are among the first to propose an automatic supervised framework to monolingually explain slang terms using external resources. However, research on automatic translation or cross-lingually explanation for slang terms is missing from the literature. Our work in Task 2 fills the gap by computing cross-cultural similarities with our bilingual word representations (SocVec) in an unsupervised way. We believe this application is useful in machine translation for social media (Ling et al., 2013). Many existing cross-lingual word embedding models rely on expensive parallel corpora with word or sentence alignments (Klementiev et al., 2012; Koˇcisk´y et al., 2014). These works often aim to improve the performance on monolingual tasks and cross-lingual model transfer for document classification, which does not require crosscultural signals. We position our work in a broader context of “monolingual mapping” based crosslingual word embedding models in the survey of Ruder et al. (2017). The SocVec uses only lexicon resource and maps monolingual vector spaces into a common high-dimensional third space by incorporating social words as pivot, where orthogonality is approximated by setting clear meaning to each dimension of the SocVec space. 7 Conclusion We present the SocVec method to compute crosscultural differences and similarities, and evaluate it on two novel tasks about mining cross-cultural differences in named entities and computing crosscultural similarities in slang terms. Through extensive experiments, we demonstrate that the proposed lightweight yet effective method outperforms a number of baselines, and can be useful in translation applications and cross-cultural studies in computational social science. Future directions include: 1) mining cross-cultural differences in general concepts other than names and slang, 2) merging the mined knowledge into existing knowledge bases, and 3) applying the SocVec in downstream tasks like machine translation.14 Acknowledgment Kenny Zhu is the contact author and was supported by NSFC grants 91646205 and 61373031. Seung-won Hwang was supported by Microsoft Research Asia. Thanks to the anonymous reviewers and Hanyuan Shi for their valuable feedback. 14We will make our code and data available at https: //github.com/adapt-sjtu/socvec. 718 References Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925. Jos´e Camacho-Collados, Mohammad Taher Pilehvar, Nigel Collier, and Roberto Navigli. 2017. 
Semeval2017 task 2: Multilingual and cross-lingual semantic word similarity. In Proc. of SemEval@ACL. Wanxiang Che, Zhenghua Li, and Ting Liu. 2010. Ltp: A chinese language technology platform. In Proc. of COLING 2010: Demonstrations. Xiao Cheng and Dan Roth. 2013. Relational inference for wikification. In Proc. of EMNLP. Yejin Choi, Claire Cardie, Ellen Riloff, and Siddharth Patwardhan. 2005. Identifying sources of opinions with conditional random fields and extraction patterns. In Proc. of HLT-EMNLP. Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2016. Learning crosslingual word embeddings without bilingual corpora. In Proc. of EMNLP. Mohammad Fazleh Elahi and Paola Monachesi. 2012. An examination of cross-cultural similarities and differences from social media data with respect to language use. In Proc. of LREC. Hady Elsahar and Samhaa R Elbeltagy. 2014. A fully automated approach for arabic slang lexicon extraction from microblogs. In Proc. of CICLing. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proc. of EACL. Ethan Fast, Binbin Chen, and Michael S Bernstein. 2016. Empath: Understanding topic signals in largescale text. In Proc. of CHI. King-wa Fu, Chung-hong Chan, and Michael Chau. 2013. Assessing censorship on microblogs in china: Discriminatory keyword analysis and the real-name registration policy. IEEE Internet Computing, 17(3):42–50. Rui Gao, Bibo Hao, He Li, Yusong Gao, and Tingshao Zhu. 2013. Developing simplified chinese psychological linguistic analysis dictionary for microblog. In Proceedings of International Conference on Brain and Health Informatics. Springer. Elisabeth Gareis and Richard Wilkins. 2011. Love expression in the united states and germany. International Journal of Intercultural Relations, 35(3):307– 319. Aparna Garimella, Rada Mihalcea, and James W. Pennebaker. 2016a. Identifying cross-cultural differences in word usage. In Proc. of COLING. Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, and Michael Mathioudakis. 2016b. Quantifying controversy in social media. In Proc. of WSDM. E. Dario Guti´errez, Ekaterina Shutova, Patricia Lichtenstein, Gerard de Melo, and Luca Gilardi. 2016. Detecting cross-cultural differences using a multilingual topic model. TACL, 4:47–60. Bo Han, Paul Cook, and Timothy Baldwin. 2012. Automatically constructing a normalisation dictionary for microblogs. In Proc. of EMNLP-CoNLL. Zellig S Harris. 1954. Distributional structure. Word, 10(2-3):146–162. Shinobu Kitayama, Hazel Rose Markus, and Masaru Kurokawa. 2000. Culture, emotion, and well-being: Good feelings in japan and the united states. Cognition & Emotion, 14(1):93–124. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proc. of COLING. Tom´aˇs Koˇcisk´y, Karl Moritz Hermann, and Phil Blunsom. 2014. Learning bilingual word representations by marginalizing alignments. In Proc. of ACL. J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33 1:159–74. Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proc. of ICML. Bill Y. Lin, Frank F. Xu, Zhiyi Luo, and Kenny Q. Zhu. 2017. Multi-channel bilstm-crf model for emerging named entity recognition in social media. In Proc. of W-NUT@EMNLP. Wang Ling, Guang Xiang, Chris Dyer, Alan Black, and Isabel Trancoso. 2013. Microblogs as parallel corpora. 
In Proc. of ACL. Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proc. of ACL (System Demonstrations). Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proc. of NIPS. Hiroyuki Nakasaki, Mariko Kawaba, Sayuri Yamazaki, Takehito Utsuro, and Tomohiro Fukuhara. 2009. Visualizing cross-lingual/cross-cultural differences in concerns in multilingual blogs. In Proc. of ICWSM. 719 Ke Ni and William Yang Wang. 2017. Learning to explain non-standard english words and phrases. In Proc. of IJCNLP. Michael J. Paul and Mark Dredze. 2011. You are what you tweet: Analyzing twitter for public health. In Proc. of ICWSM. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proc. of EMNLP. Sasa Petrovic, Miles Osborne, and Victor Lavrenko. 2010. Streaming first story detection with application to twitter. In Proc. of HLT-NAACL. Lev Ratinov, Dan Roth, Doug Downey, and Mike Anderson. 2011. Local and global algorithms for disambiguation to wikipedia. In Proc. of ACL. Sara Rosenthal and Kathy McKeown. 2015. I couldn’t agree more: The role of conversational structure in agreement and disagreement detection in online discussions. In Proc. of SIGDIAL. Sebastian Ruder, Ivan Vuli´c, and Anders Søgaard. 2017. A survey of cross-lingual embedding models. arXiv preprint arXiv:1706.04902. Chandar A P Sarath, Stanislas Lauly, Hugo Larochelle, Mitesh M. Khapra, Balaraman Ravindran, Vikas Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. In Proc. in NIPS. Yla R. Tausczik and James W. Pennebaker. 2009. The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology, 29(1):24–54. Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word embeddings: An empirical comparison. In Proc. of ACL. Shan Wang and Francis Bond. 2013. Building the chinese open wordnet (cow): Starting from core synsets. In Proceedings of the 11th Workshop on Asian Language Resources, a Workshop at IJCNLP. William Yang Wang and Diyi Yang. 2015. That’s so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using #petpeeve tweets. In Proc. of EMNLP. Boliang Zhang, Hongzhao Huang, Xiaoman Pan, Sujian Li, Chin-Yew Lin, Heng Ji, Kevin Knight, Zhen Wen, Yizhou Sun, Jiawei Han, and B¨ulent Yener. 2015. Context-aware entity morph decoding. In Proc. of ACL.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 720–730 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 720 Classification of Moral Foundations in Microblog Political Discourse Kristen Johnson and Dan Goldwasser Department of Computer Science Purdue University, West Lafayette, IN 47907 {john1187, dgoldwas}@purdue.edu Abstract Previous works in computer science, as well as political and social science, have shown correlation in text between political ideologies and the moral foundations expressed within that text. Additional work has shown that policy frames, which are used by politicians to bias the public towards their stance on an issue, are also correlated with political ideology. Based on these associations, this work takes a first step towards modeling both the language and how politicians frame issues on Twitter, in order to predict the moral foundations that are used by politicians to express their stances on issues. The contributions of this work includes a dataset annotated for the moral foundations, annotation guidelines, and probabilistic graphical models which show the usefulness of jointly modeling abstract political slogans, as opposed to the unigrams of previous works, with policy frames for the prediction of the morality underlying political tweets. 1 Introduction Social media microblogging platforms, specifically Twitter, have become highly influential and relevant to current political events. Such platforms allow politicians to communicate with the public as events are unfolding and shape public discourse on various issues. Furthermore, politicians are able to express their stances on issues and by selectively using certain political slogans, reveal their underlying political ideologies and moral views on an issue. Previous works in political and social science have shown a correlation between political ideology, stances on political issues, and the moral convictions used to justify these stances (Graham et al., 2009). For example, Figure 1 presents a tweet, by a prominent member of the U.S. Congress, which expresses concern We are permitting the incarceration and shooting of thousands of black and brown boys in their formative years. Figure 1: Example Tweet Highlighting Classification Difficulty. about the fate of young individuals (i.e., incarceration, shooting), specifically for vulnerable members of minority groups. The Moral Foundations Theory (MFT) (Haidt and Joseph, 2004; Haidt and Graham, 2007) provides a theoretical framework for explaining these nuanced distinctions. The theory suggests that there are five basic moral values which underlie human moral perspectives, emerging from evolutionary, social, and cultural origins. These are referred to as the moral foundations (MF) and include Care/Harm, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, and Purity/Degradation (Table 1 provides a more detailed explanation). The above example reflects the moral foundations that shape the author’s perspective on the issue: Harm and Cheating. Traditionally, analyzing text based on the MFT has relied on the use of a lexical resource, the Moral Foundations Dictionary (MFD) (Haidt and Graham, 2007; Graham et al., 2009). The MFD, similar to LIWC (Pennebaker et al., 2001; Tausczik and Pennebaker, 2010), associates a list of related words with each one of the moral foundations. 
Therefore, analyzing text equates to counting the number of occurrences of words in the text which also match the words in the MFD. Given the highly abstract and generalized nature of the moral foundations, this approach often falls short of dealing with the highly ambiguous text 721 politicians use to express their perspectives on specific issues. The following tweet, by another prominent member of the U.S. Congress, reflects the author’s use of both the Harm and Cheating moral foundations. 30k Americans die to gun violence. Still, I'm moving to North Carolina where it's safe to go to the bathroom. Figure 2: Example Tweet Highlighting Classification Difficulty. While the first foundation (Harm) can be directly identified using a word match to the MFD (as shown in red), the second foundation requires first identifying the sarcastic expression referring to LGBTQ rights and then using extensive world knowledge to determine the appropriate moral foundation. 1 Relying on a match of safe to the MFD would indicate the Care MF is being used instead of the Cheating foundation. In this paper, we aim to solve this challenge by suggesting a data-driven approach to moral foundation identification in tweets. Previous work (Garten et al., 2016) has looked at classification-based approaches over tweets specifically related to Hurricane Sandy, augmenting the textual content with background knowledge using entity linking (Lin et al., 2017). Different from this and similar works, we look at the tweets of U.S. politicians over a long period of time, discussing a large number of events, and touching on several different political issues. Our approach is guided by the intuition that the abstract moral foundations will manifest differently in text, depending on the specific characteristics of the events discussed in the tweet. As a result, it is necessary to correctly model the relevant contextualizing information. Specifically, we are interested in exploring how political ideology, language, and framing interact to represent morality on Twitter. We examine the interplay of political slogans (for example “repeal and replace” when referring to the Affordable Care Act), and policy framing techniques (Boydstun et al., 2014; Johnson et al., 2017) as features for predicting the underlying moral values which are expressed in politicians’ tweets. Additionally, we identify high-level themes characterizing the 1The tweet refers to legislation proposed in 2016 concerning transgender bathroom access restrictions. main point of the tweet, which allows the model to identify the author’s perspective on specific issues and generalize over the specific wording used (for example, if the tweet mentions Religion or Political Maneuvering). This information is incorporated into global probabilistic models using Probabilistic Soft Logic (PSL), a graphical probabilistic modeling framework (Bach et al., 2013). PSL specifies high level rules over a relational representation of these features, which are compiled into a graphical model called a hinge-loss Markov random field that is used to make the final prediction. Our experiments show the importance of modeling contextualizing information, leading to significant improvements over dictionary driven approaches and purely lexical methods. In summary, this paper makes the following contributions: (1) This work is among the first to explore jointly modeling language and political framing techniques for the classification of moral foundations used in the tweets of U.S. 
politicians on Twitter. (2) We provide a description of our annotation guidelines and an annotated dataset of 2,050 tweets.2 (3) We suggest computational models which easily adapt to new policy issues, for the classification of the moral foundations present in tweets. 2 Related Works In this paper, we explore how political ideology, language, framing, and morality interact on Twitter. Previous works have studied framing in longer texts, such as congressional speeches and news (Fulgoni et al., 2016; Tsur et al., 2015; Card et al., 2015; Baumer et al., 2015), as well as issue-independent framing on Twitter (Johnson and Goldwasser, 2016; Johnson et al., 2017). Ideology measurement (Iyyer et al., 2014; Bamman and Smith, 2015; Sim et al., 2013; Djemili et al., 2014), political sentiment analysis (Pla and Hurtado, 2014; Bakliwal et al., 2013), and polls based on Twitter political sentiment (Bermingham and Smeaton, 2011; O’Connor et al., 2010; Tumasjan et al., 2010) are also related to the study of framing. The association between Twitter and framing in molding public opinion of events and issues (Burch et al., 2015; Harlow and Johnson, 2011; Meraz and Papacharissi, 2013; Jang and 2The data will be available at http://purduenlp. cs.purdue.edu/projects/twittermorals. 722 MORAL FOUNDATION AND BRIEF DESCRIPTION 1. Care/Harm: Care for others, generosity, compassion, ability to feel pain of others, sensitivity to suffering of others, prohibiting actions that harm others. 2. Fairness/Cheating: Fairness, justice, reciprocity, reciprocal altruism, rights, autonomy, equality, proportionality, prohibiting cheating. 3. Loyalty/Betrayal: Group affiliation and solidarity, virtues of patriotism, self-sacrifice for the group, prohibiting betrayal of one’s group. 4. Authority/Subversion: Fulfilling social roles, submitting to authority, respect for social hierarchy/traditions, leadership, prohibiting rebellion against authority. 5. Purity/Degradation: Associations with the sacred and holy, disgust, contamination, religious notions which guide how to live, prohibiting violating the sacred. 6. Non-moral: Does not fall under any other foundations. Table 1: Brief Descriptions of Moral Foundations. Hart, 2015) has also been studied. The connection between morality and political ideology has been explored in the fields of psychology and sociology (Graham et al., 2009, 2012). Moral foundations were also used to inform downstream tasks, by using the MFD to identify the moral foundations in partisan news sources (Fulgoni et al., 2016), or to construct features for other downstream tasks (Volkova et al., 2017). Several recent works have looked into using data-driven methods that go beyond the MFD to study tweets related to Hurricane Sandy (Garten et al., 2016; Lin et al., 2017). 3 Data Annotation The Moral Foundations Theory (Haidt and Graham, 2007) was proposed by sociologists and psychologists as a way to understand how morality develops, as well as its similarities and differences across cultures. The theory consists of the five moral foundations shown in Table 1. The goal of this work is to classify the tweets of the Congressional Tweets Dataset (Johnson et al., 2017) with the moral foundation implied in the tweet. We first attempted to use Amazon Mechanical Turk for annotation, but found that most Mechanical Turkers would choose the Care/Harm or Fairness/Cheating label a majority of the time. 
Additionally, annotators preferred choosing first the foundation branch (i.e., Care/Harm) and then its sentiment (positive or negative) as opposed to the choice of each foundation separately, i.e., given the choice between Harm or Care/Harm and Negative, annotators preferred the latter. Based on these observations, two annotators, one liberal and one conservative (self-reported), manually annotated a subset of tweets. This subset had an inter-annotator agreement of 67.2% using Cohen’s Kappa coefficient. The annotators then discussed and agreed on general guidelines which were used to label the remaining tweets of the dataset. The resulting dataset has an inter-annotator agreement of 79.2% using Cohen’s Kappa statistic. The overall distribution, distributions by political party, and distributions per issue of the labeled dataset are presented in Table 2. Table 3 lists the frames that most frequently co-occured with each MF. As expected, frames concerning Morality and Sympathy are highly correlated with the Purity foundation, while Subversion is highly correlated with the Legal and Political frames. Labeling tweets presents several challenges. First, tweets are short and thus lack the context often necessary for choosing a moral viewpoint. Tweets are often ambiguous, e.g., a tweet may express care for people who are being harmed by a policy. Another major challenge was overcoming the political bias of the annotator. For example, if a tweet discusses opposing Planned Parenthood because it provides abortion services, the liberal annotator typically viewed this as Harm (i.e., hurting women by taking away services from them), while the conservative annotator tended to view this as Purity (i.e., all life is sacred and should be protected). To overcome this bias, annotators were given the political party of the politician who wrote the tweets and instructed to choose the moral foundation from the politician’s perspective. To further simplify the annotation process, all tweets belonging to one political party were labeled together, i.e., all Republican tweets were labeled and then all Democrat tweets were labeled. Finally, tweets present a compound problem, often expressing two thoughts which can further be contradictory. This results in one tweet having multiple moral foundations. Annotators chose a primary moral foundation whenever possible, but were allowed a secondary foundation if the tweet presented two differing thoughts. Several recurring themes continued to appear throughout the dataset including “thoughts and prayers” for victims of gun shooting events or rhetoric against the opposing political party. The annotators agreed to use the following moral foundation labels for these repeating topics as follows: (1) The Purity label is used for tweets that relate to 723 Morals OVERALL PARTY ISSUE REP DEM ABO ACA GUN IMM LGBTQ TER Care 524 156 368 37 123 215 33 34 113 Harm 355 151 204 26 64 141 19 34 101 Fairness 268 55 213 41 81 19 11 86 39 Cheating 82 37 45 14 27 11 10 9 13 Loyalty 303 63 240 28 29 128 36 38 58 Betrayal 53 25 28 10 4 9 6 3 22 Authority 192 62 130 24 44 50 38 10 34 Subversion 419 251 168 34 169 75 73 25 60 Purity 174 86 88 24 3 102 5 24 41 Degradation 66 34 32 5 0 31 0 4 31 Non-moral 334 198 136 17 143 28 47 7 96 Table 2: Distributions of Moral Foundations. Overall is across the entire dataset. Party is the Republican (REP) or Democrat (DEM) specific distributions. Issue lists the six issue-specific distributions (Abortion, ACA, Guns, Immigration, LGBTQ, Terrorism). 
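As a side note on the agreement figures reported above, Cohen's Kappa can be computed directly with scikit-learn; the annotator label lists below are hypothetical placeholders, not the actual annotations.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-tweet labels from the two annotators (placeholders).
annotator_1 = ["Care", "Subversion", "Non-moral", "Purity", "Harm"]
annotator_2 = ["Care", "Subversion", "Harm", "Purity", "Harm"]

print(cohen_kappa_score(annotator_1, annotator_2))
```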
MORAL FOUNDATION AND CO-OCCURING FRAMES Care: Capacity & Resources, Security & Defense, Health & Safety, Quality of Life, Public Sentiment, External Regulation & Reputation Harm: Economic, Crime & Punishment Fairness: Fairness & Equality Loyalty: Cultural Identity Subversion: Legality, Constitutionality, & Jurisdiction, Political Factors & Implications, Policy Description, Prescription, & Evaluation Purity: Morality & Ethics, Personal Sympathy & Support Non-moral: Factual, (Self) Promotion Table 3: Foundations and Co-occuring Frames. Cheating, Betrayal, Authority, and Degradation did not co-occur frequently with any frames. prayers or the fight against ISIL/ISIS. (2) Loyalty is for tweets that discuss “stand(ing) with” others, American values, troops, or allies, or reference a demographic that the politician belongs to, e.g. if the politician tweeting is a woman and she discusses an issue in terms of its effects on women. (3) At the time the dataset was collected, the President was Barack Obama and the Republican party controlled Congress. Therefore, any tweets specifically attacking Obama or Republicans (the controlling party) were labeled as Subversion. (4) Tweets discussing health or welfare were labeled as Care. (5) Tweets which discussed limiting or restricting laws or rights were labeled as Cheating. (6) Sarcastic attacks, typically against the opposing political party, were labeled as Degradation. 4 Feature Extraction for PSL Models For this work, we designed extraction models and PSL models that were capable of adapting to the dynamic language used on Twitter and predicting the moral foundation of a given tweet. Our approach uses weakly supervised extraction models, whose only initial supervision is a set of unigrams and the political party of the tweet’s author, to extract features for each PSL model. These features are represented as PSL predicates and combined into the probabilistic rules of each model, as shown in Table 4, which successively build upon the rules of the previous model. 4.1 Global Modeling Using PSL PSL is a declarative modeling language which can be used to specify weighted, first-order logic rules that are compiled into a hinge-loss Markov random field. This field defines a probability distribution over possible continuous value assignments to the random variables of the model (Bach et al., 2015) and is represented as: P(Y | X) = 1 Z exp − M X r=1 λrφr(Y , X) ! where Z is a normalization constant, λ is the weight vector, and φr(Y, X) = (max{lr(Y, X), 0})ρr is the hinge-loss potential specified by a linear function lr. The exponent ρr ∈1, 2 is optional. Each potential represents the instantiation of a rule, which takes the following form: λ1 : P1(x) ∧P2(x, y) →P3(y) λ2 : P1(x) ∧P4(x, y) →¬P3(y) P1, P2, P3, and P4 are predicates (e.g., party, issue, and frame) and x, y are variables. Each rule has a weight λ to reflect its importance to the model. Using concrete constants a, b (e.g., tweets) which instantiate the variables x, y, model atoms are mapped to continuous [0,1] assignments. 724 MOD. 
INFORMATION USED EXAMPLE OF PSL RULE M1 UNIGRAMS (MFD OR AR) UNIGRAMM (T, U) →MORAL(T, M) M2 M1 + PARTY UNIGRAMM (T, U) ∧PARTY(T, P) →MORAL(T, M) M3 M2 + ISSUE UNIGRAMM (T, U) ∧PARTY(T, P) ∧ISSUE(T, I) →MORAL(T, M) M4 M3 + PHRASE UNIGRAMM (T, U) ∧PARTY(T, P) ∧PHRASE(T, PH) →MORAL(T, M) M5 M4 + FRAME UNIGRAMM (T, U) ∧PHRASE(T, PH) ∧FRAME(T, F) →MORAL(T, M) M6 M5 + PARTY-BIGRAMS UNIGRAMM (T, U) ∧PARTY(T, P) ∧BIGRAMP (T, B) →MORAL(T, M) M7 M6 + PARTY-ISSUE-BIGRAMS UNIGRAMM (T, U) ∧PARTY(T, P) ∧BIGRAMP I(T, B) →MORAL(T, M) M8 M7 + PHRASE BIGRAMP I(T, B) ∧PHRASE(T, PH) →MORAL(T, M) M9 M8 + FRAME BIGRAMP I(T, B) ∧FRAME(T, F) →MORAL(T, M) M10 M9 + PARTY-TRIGRAMS UNIGRAMM (T, U) ∧PARTY(T, P) ∧TRIGRAMP (T, TG) →MORAL(T, M) M11 M10 + PARTY-ISSUE-TRIGRAMS UNIGRAMM (T, U) ∧PARTY(T, P) ∧TRIGRAMP I(T, TG) →MORAL(T, M) M12 M11 + PHRASE TRIGRAMP I(T, TG) ∧PHRASE(T, PH) →MORAL(T, M) M13 M12 + FRAME TRIGRAMP I(T, TG) ∧FRAME(T, F) →MORAL(T, M) Table 4: Examples of PSL Moral Model Rules Using Gold Standard Frames. For these rules, the FRAME predicate is initialized with the known frame labels of the tweet. Each model builds successively on the rules of the previous model. M2: UNIGRAMS + PARTY UNIGRAMM (T, U) ∧PARTY(T, P) ∧FRAME(T, F) → MORAL(T, M) UNIGRAMM (T, U) ∧PARTY(T, P) ∧MORAL(T, M) → FRAME(T, F) M13: ALL FEATURES TRIGRAMP I(T, TG) ∧PHRASE(T, PH) ∧FRAME(T, F) →MORAL(T, M) TRIGRAMP I(T, TG) ∧UNIGRAMM (T, U) ∧MORAL(T, M) →FRAME(T, F) Table 5: Examples of PSL Joint Moral and Frame Model Rules. For these models, the FRAME predicate is not initialized with known values, but is predicted jointly with the MORAL predicate. 4.2 Feature Extraction Models For each aspect of the tweets that composes the PSL models, scripts are written to first identify and then extract the correct information from the tweets. Once extracted, this information is formatted into PSL predicate notation and input to the PSL models. Table 4 presents the information that composes each PSL model, as well as an example of how rules in the PSL model are constructed. Language: Works studying the Moral Foundations Theory typically assign a foundation to a body of text based on a majority match of the words in the text to the Moral Foundations Dictionary (MFD), a predefined list of unigrams associated with each foundation. These unigrams capture the conceptual idea behind each foundation. Annotators noted, however, that when choosing a foundation they typically used a small phrase or the entire tweet, not a single unigram. Based on this, we compiled all of the annotators’ phrases per foundation into a unique set to create a new list of unigrams for each foundation. These unigrams are referred to as “Annotator’s Rationale (AR)” throughout the remainder of this paper. The PSL predicate UNIGRAMM(T, U) is used to input any unigram U from tweet T that matches the M list of unigrams (either from the MFD or AR lists) into the PSL models. An example of a rule using this predicate is shown in the first row of Table 4. During annotation, we observed that often a tweet has only one match to a unigram, if any, and therefore a majority count approach may fail. Further, as shown in Figure 2, many tweets have one unigram that matches one foundation and another unigram that matches a different foundation. In such cases, the correct foundation cannot be determined from unigram counts alone. 
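To make this limitation concrete, the sketch below implements the kind of dictionary-matching majority vote described above: unigrams are matched against per-foundation word lists and the most frequent foundation wins. The word lists are small illustrative stand-ins for the MFD/AR lists, not the real resources.

```python
from collections import Counter

# Small illustrative word lists -- stand-ins for the MFD/AR lists.
MFD = {
    "Care": {"safe", "protect", "health"},
    "Harm": {"violence", "kill", "die"},
    "Authority": {"permit", "law", "order"},
}

def majority_vote_foundation(tweet):
    """Count unigram matches per foundation and return the majority label."""
    tokens = tweet.lower().split()
    counts = Counter({f: sum(tok in words for tok in tokens)
                      for f, words in MFD.items()})
    foundation, n_matches = counts.most_common(1)[0]
    return foundation if n_matches > 0 else "Non-moral"

print(majority_vote_foundation("30k Americans die to gun violence"))
```

When a tweet contains matches for two different foundations, or no matches at all, such a counter either guesses or falls back to Non-moral — exactly the failure mode that motivates the richer contextual features used in the PSL models.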
Based on these observations and the annotators’ preference for using phrases, we incorporate the most frequent bigrams and trigrams for each political party (BIGRAMP (T, B) and TRIGRAMP (T, TG)) and for each party on each issue (BIGRAMP I(T, B) and TRIGRAMP I(T, TG)). These top 20 bigrams and trigrams contribute to a more accurate prediction than unigrams alone (Johnson et al., 2017). Ideological Information: Previous works have shown a strong correlation between ideology and the moral foundations (Haidt and Graham, 2007), as well as between ideology and policy issues (Boydstun et al., 2014). Annotators were able to agree on labels when instructed to label from the ideological point of view of the tweet’s author, even if it opposed their own views. Based on these 725 positive correlations, we incorporate both the issue of the tweet (ISSUE(T, I)) and the political party of the author of the tweet (PARTY(T, P)) into the PSL models. Examples of how this information is represented in the PSL models are shown in rows two and three of Table 4. Abstract Phrases: As described previously, annotators reported that phrases were more useful than unigrams in determining the moral foundation of the tweet. Due to the dynamic nature of language and trending issues on Twitter, it is impracticable to construct a list of all possible phrases one can expect to appear in tweets. However, because politicians are known for sticking to certain talking points, these phrases can be abstracted into higher-level phrases that are more stable and thus easier to identify and extract. For example, a tweet discussing “President Obama’s signing a bill” has two possible concrete phrases: President Obama’s signing and signing a bill. Each phrase falls under two possible abstractions: political maneuvering (Obama’s actions) and mentions legislation (signing of a bill). In this paper we use the following high-level abstractions: legislation or voting, rights and equality, emotion, sources of danger or harm, positive benefits or effects, solidarity, political maneuvering, protection and prevention, American values or traditions, religion, and promotion. For example, if a tweet mentions “civil rights” or “equal pay”, then these phrases indicate that the rights and equality abstraction is being used to express morality. Some of these abstractions correlate with the corresponding MF or frame, e.g., the religion abstraction is highly correlated with the Purity foundation and political maneuvering is correlated with the Political Factors & Implications Frame. To match phrases in tweets to these abstractions, we use the embedding-based model of Lee et al. (2017). This phrase similarity model was trained on the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013) and incorporates a Convolutional Neural Network (CNN) to capture sentence structures. This model generates the embeddings of our abstract phrases and computes the cosine similarities between phrases and tweets as the scores. The input tweets and phrases are represented as the average word embeddings in the input layer, which are then projected into a convolutional layer, a max-pooling layer, and finally two fully-connected layers. The embeddings are thus represented in the final layer. 
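Abstracting away from the network details (its learning objective is given next), the scoring and thresholding step can be sketched as follows. The embed() helper here is a simplified stand-in that averages pre-trained word vectors rather than running the trained CNN, and the variable names and abstraction examples are illustrative; only the cosine scoring and the 0.45 cutoff mentioned below come from the paper.

```python
import numpy as np

def embed(text, emb, dim=300):
    """Average-word-vector stand-in for the CNN phrase embedding."""
    vecs = [emb[w] for w in text.lower().split() if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def matching_abstractions(tweet, abstractions, emb, threshold=0.45):
    """Return abstraction names whose cosine similarity to the tweet
    exceeds the threshold; these would become PHRASE(T, PH) atoms."""
    t = embed(tweet, emb)
    matched = []
    for name, example_phrase in abstractions.items():
        p = embed(example_phrase, emb)
        denom = np.linalg.norm(t) * np.linalg.norm(p)
        if denom > 0 and float(np.dot(t, p) / denom) >= threshold:
            matched.append(name)
    return matched

# abstractions = {"rights and equality": "civil rights equal pay",
#                 "religion": "prayers faith", ...}   (illustrative)
```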
The learning objective of this model is: min Wc,Ww  X <x1,x2>∈X max(0, δ −cos(g(x1), g(x2)) + cos(g(x1), g(t1))) +max(0, δ −cos(g(x1), g(x2))) + cos(g(x2), g(t2))  +λc||Wc||2 + λw||Winit −Ww||2, where X is all the positive input pairs, δ is the margin, g(·) represents the network, λc and λw are the weights for L2-regularization, Wc is the network parameters, Ww is the word embeddings, Winit is the initial word embeddings, and t1 and t2 are negative examples that are randomly selected. All tweet-phrase pairs with a cosine similarity over a given threshold are used as input to the PSL model via the predicate PHRASE(T, PH), which indicates that tweet T contains a phrase that is similar to an abstracted phrase (PH). 3 Rows four, eight, and twelve of Table 4 show examples of the phrase rules as used in our modeling procedure. Nuanced Framing: Framing is a political strategy in which politicians carefully word their statements in order to bias public opinion towards their stance on an issue. This technique is a finegrained view of how issues are expressed. Frames are associated with issue, political party, and ideologies. For example, if a politician emphasizes the economic burden a new bill would place on the public, then they are using the Economic frame. Different from this, if they emphasize how people’s lives will improve because of this bill, then they are using the Quality of Life frame. In this work, we explore frames in two settings: (1) where the actual frames of tweets are known and used to predict the moral foundation of the tweets and (2) when the frames are unknown and predicted jointly with the moral foundations. Using the Congressional Tweets Dataset as the true labels for 17 policy frames, this information is input to the PSL models using the FRAME(T, F) predicate as shown in Table 4. Conversely, the 3A threshold score of 0.45 provided the most accurate matches while minimizing noise. 726 same predicate can be used as a joint prediction target predicate, with no initialization, as shown in Table 5. 5 Experimental Results In this section, we present an analysis of the results of our modeling approach. Table 6 summarizes our overall results and compares the traditional BoW SVM classifier4 to several variations of our model. We provide an in-depth analysis, broken down by the different types of moral foundations, in Tables 7 and 8. We also study the relationship between moral foundations, policy framing, and political ideology. Table 9 describes the results of a joint model for predicting moral foundations and policy frames. Finally, in Section 6 we discuss how moral foundations can be used for the downstream prediction of political party affiliation. MODEL MFD AR SVM BOW 18.70 — PSL BOW 21.88 — MAJORITY VOTE 12.50 10.86 M1 (UNIGRAMS) 7.17 8.68 M3 (+ POLITICAL INFO) 22.01 30.45 M5 (+ FRAMES) 28.94 37.44 M9 (+ BIGRAMS) 67.93 66.50 M13 (ALL FEATURES) 72.49 69.38 Table 6: Overview of Macro-weighted Average F1 Scores of SVM and PSL Models. The top portion of the table shows the results of the three baselines. The bottom portion shows a subset of the PSL models (parentheses indicate features added onto the previous models). Evaluation Metrics: Since each tweet can have more than one moral foundation, our prediction task is a multilabel classification task. 
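Because a tweet can carry more than one foundation, evaluation relies on example-based multilabel precision and recall; a minimal sketch is given below, with the formal definitions following. Here gold and pred are assumed to be parallel lists of label sets, one per tweet.

```python
def multilabel_prf(gold, pred):
    """Example-based multilabel precision, recall, and F1.

    gold and pred are parallel lists of label sets, one per tweet;
    tweets with an empty prediction (or empty gold set) contribute 0
    to the corresponding average.
    """
    T = len(gold)
    precision = sum(len(g & p) / len(p) for g, p in zip(gold, pred) if p) / T
    recall = sum(len(g & p) / len(g) for g, p in zip(gold, pred) if g) / T
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# multilabel_prf([{"Harm", "Cheating"}], [{"Harm"}])  ->  (1.0, 0.5, ~0.67)
```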
The precision of a multilabel model is the ratio of how many predicted labels are correct:

Precision = \frac{1}{T} \sum_{t=1}^{T} \frac{|Y_t \cap h(x_t)|}{|h(x_t)|}    (1)

The recall of this model is the ratio of how many of the actual labels were predicted:

Recall = \frac{1}{T} \sum_{t=1}^{T} \frac{|Y_t \cap h(x_t)|}{|Y_t|}    (2)

In both formulas, T is the number of tweets, Y_t is the true label for tweet t, x_t is a tweet example, and h(x_t) are the predicted labels for that tweet. The F1 score is computed as the harmonic mean of the precision and recall. Additionally, the last lines of Tables 7 and 8 provide the macro-weighted average F1 score over all moral foundations.

4 For this work, we used the SVM implementation provided by scikit-learn.

Analysis of Supervised Experiments: We conducted supervised experiments using five-fold cross validation with randomly chosen splits. Table 6 shows an overview of the average results of our supervised experiments for five of the PSL models. The first column lists the SVM or PSL model. The second column presents the results of a given model when using the MFD as the source of the unigrams for the initial model (M1). The final column shows the results when the AR unigrams are used as the initial source of supervision. The first two rows show the results of predicting the morals present in tweets using a bag-of-words (BoW) approach. Both the SVM and PSL models perform poorly due to the eleven predictive classes and noisy input features. The third row shows the results when taking a majority vote over the presence of MFD unigrams, similar to previous works. This approach is simpler and less noisy than M1, the PSL model closest to this approach. The last five lines of this table also show the overall trends of the full results shown in Tables 7 and 8. As can be seen in all three tables, as we add more information with each PSL model, the overall results continue to improve, with the final model (M13) achieving the highest F1 score for both sources of unigrams. An interesting trend to note is that the AR unigram-based models result in better average performance for most of the models until M9. Models M9 and above incorporate the most powerful features: bigrams and trigrams with phrases and frames. This suggests that the AR unigrams, designed specifically for the political Twitter domain, are more useful than the MFD unigrams when only unigrams are available. Conversely, the MFD unigrams are designed to conceptually capture morality, and therefore have weaker performance in the unigram-based models, but achieve higher performance when combined with the more powerful features of the higher models. For all models, incorporating phrases and frames results in a more accurate prediction than when using unigrams alone.

Moral Fdn.
RESULTS OF NON-JOINT PSL MODEL PREDICTIONS M1 M2 M3 M4 M5 M6 M7 M8 M9 M10 M11 M12 M13 CARE 16.61 52.51 43.34 53.24 53.38 53.59 55.64 62.40 66.00 66.48 67.32 67.59 67.78 HARM 12.57 47.62 42.58 50.39 57.24 55.29 60.06 67.06 71.58 71.58 72.39 73.68 73.54 FAIRNESS 24.68 52.22 45.16 50.22 51.50 50.86 61.54 71.13 74.00 74.50 75.32 75.48 75.48 CHEATING 0.00 0.00 0.00 0.00 0.00 0.00 0.00 21.05 51.85 51.85 56.14 60.00 60.00 LOYALTY 18.29 44.53 41.49 43.87 43.59 44.22 47.65 59.15 62.82 63.75 63.75 63.95 64.20 BETRAYAL 0.00 0.00 10.00 20.00 20.00 20.00 18.18 34.78 66.67 66.67 68.42 70.00 70.00 AUTHORITY 0.00 30.93 30.19 33.10 35.53 33.96 45.52 55.29 62.50 65.91 67.78 69.23 69.61 SUBVERSION 3.77 32.69 13.39 25.90 24.66 42.36 59.29 72.66 77.29 78.08 78.41 79.22 79.61 PURITY 0.00 8.89 4.88 9.88 9.76 56.12 63.86 70.86 72.13 74.16 76.09 79.14 80.41 DEGRADATION 2.99 15.38 9.52 10.00 10.00 8.00 20.69 52.94 61.54 61.54 68.09 73.47 73.47 NON-MORAL 0.00 0.00 1.60 3.51 12.70 12.31 54.55 71.14 80.90 81.82 82.35 82.54 83.33 AVERAGE 7.17 25.89 22.01 27.28 28.94 34.25 44.27 58.04 67.93 68.76 70.55 72.21 72.49 Table 7: F1 Scores of PSL Models Using the Moral Foundations Dictionary (MFD). The highest prediction per moral foundation is marked in bold. Moral Fdn. RESULTS OF NON-JOINT PSL MODEL PREDICTIONS M1 M2 M3 M4 M5 M6 M7 M8 M9 M10 M11 M12 M13 CARE 7.29 29.72 30.51 30.86 30.62 35.66 46.41 54.17 61.77 62.16 62.91 64.79 64.91 HARM 2.25 8.89 19.31 21.89 26.18 26.09 37.28 52.40 62.18 62.18 63.74 64.67 64.86 FAIRNESS 9.15 26.43 27.12 28.70 30.43 31.92 53.56 69.88 72.52 72.52 74.26 74.63 74.63 CHEATING 4.76 13.33 25.45 25.45 38.71 39.34 40.68 51.61 62.16 62.16 64.94 65.82 65.82 LOYALTY 2.61 19.66 23.85 25.10 27.31 29.57 38.06 47.73 54.30 55.22 55.59 57.34 57.91 BETRAYAL 0.00 0.00 0.00 6.25 12.12 11.76 18.18 28.57 60.47 60.47 62.22 65.22 65.22 AUTHORITY 13.59 40.19 48.40 51.82 56.25 56.14 57.04 63.30 66.45 66.67 67.32 67.53 67.53 SUBVERSION 4.79 40.69 42.34 43.21 43.93 44.03 47.20 55.12 56.47 56.47 57.07 57.53 57.65 PURITY 5.62 13.64 19.78 23.16 30.00 60.38 69.66 76.67 79.35 79.35 80.21 81.82 82.52 DEGRADATION 16.66 31.37 37.74 44.83 51.61 51.61 57.14 68.75 73.53 73.53 77.33 78.95 78.95 NON-MORAL 28.78 52.99 60.48 61.33 64.72 66.00 73.62 79.41 82.25 82.25 82.55 82.78 83.20 AVERAGE 8.68 25.17 30.45 32.96 37.44 41.14 48.98 58.87 66.50 66.63 68.01 69.19 69.38 Table 8: F1 Scores of PSL Models Using Annotator’s Rationale (AR). The highest prediction per moral foundation is marked in bold. Analysis of Joint Experiments: In addition to studying the effects of each feature on the models’ ability to predict moral foundations, we also explored jointly predicting both policy frames and moral foundations. These tasks are highly related as shown by the large increase in score between the baseline and skyline measurements in Table 9 once frames are incorporated into the models. Both moral foundations and frame classification are challenging multilabel classification tasks, the former using 11 possible foundations and the latter consisting of 17 possible frames. Furthermore, joint learning problems are harder to learn due to a larger numbers of parameters, which in turn also affects learning and inference. Table 9 shows the macro-weighted average F1 scores for three different models. The BASELINE model shows the results of predicting only the MORAL of the tweet using the non-joint model M13, which uses all features with frames initialized. 
The JOINT model is designed to predict both the moral foundation and frame of a tweet simultaneously (as shown in Table 5), with no frame initialization. Finally, the SKYLINE model is M13 with all features, where the frames are initialized with their known values. The joint model using AR unigrams outperforms the baseline, showing that there is some benefit to modeling moral foundations and frames together, as well as using domain-specific unigrams. However, it is unable to beat the MFDbased unigrams model. This is likely due to the large amount of noise introduced by incorrect frame predictions into the joint model. As expected, the joint model does not outperform the skyline model which is able to use the known values of the frames in order to accurately classify the moral foundations associated with the tweets. Finally, the predictions for the frames in the joint model were quite low, going from an average F1 score of 26.09 in M1 to an average F1 score of 27.99 in M13. This likely has two causes: (1) frame prediction is a challenging 17-label classification task, with a random baseline of 6% (which 728 our approach is able to exceed) and (2) the lower performance is because the frames are predicted with no initialization. In previous works, the frame prediction models are initialized with a set of unigrams expected to occur for each frame. Different from this approach, the only information our models provide to the frames are political party, issue, associated bigrams and trigrams, and the predicted values for the moral foundations from using this information. The F1 score of 27.99 with such minimal initialization indicates that there is indeed a relationship between policy frames and the moral foundations expressed in tweets worth exploring in future work. PSL MODEL MFD AR BASELINE 55.49 55.88 JOINT 51.22 58.75 SKYLINE 72.49 69.38 Table 9: Overview of Macro-weighted Average F1 Scores of Joint PSL Model M13. BASELINE is the MORAL prediction result. JOINT is the result of jointly predicting the MORAL and uninitialized FRAME predicates. SKYLINE shows the results when using all features with initialized frames. 6 Qualitative Results Previous works (Makazhanov and Rafiei, 2013; Preot¸iuc-Pietro et al., 2017) have shown the usefulness of moral foundations for the prediction of political party preference and the political ideologies of Twitter users. The moral foundation information used in these tasks is typically represented as word-level features extracted from the MFD. Unfortunately, these dictionary-based features are often too noisy to contribute to highly accurate predictions. Recall the example tweets shown in Figures 1 and 2. Both figures are examples of tweets that are mislabeled by the traditional MFD-based approach, but correctly labeled using PSL Model M13. Using the MFD, Figure 1 is labeled as Authority due to “permit”, the only matching unigram, while Figure 2 is incorrectly labeled as Care, even though there is one matching unigram for Harm and one for Care. To further demonstrate this point we compare the dictionary features to features extracted from the MORAL predictions of our PSL model. Table 10 shows the results of using the different feature sets for the prediction of political affiliation of the author of a given tweet. All three models use moral information for prediction, but this information is represented differently in each of the models. The MFD model (line 1) uses the MFD unigrams to directly predict the political party of the author. 
The PSL model (line 2) uses the MF prediction made by the best performing model (M13) as features. Finally, the GOLD model (line 3) uses the actual MF annotations. The difference in performance between the GOLD and MFD results shows that directly mapping the expected MFD unigrams to politicians’ tweets is not informative enough for party affiliation prediction. However, by using abstract representations of language, the PSL model is able to achieve results closer to that which can be attained when using the actual annotations as features. PSL MODEL REP DEM MFD 48.72 51.28 PSL 61.25 66.92 GOLD 68.57 71.43 Table 10: Accuracy of Author Political Party Prediction. REP represents Republican and DEM represents Democrat. 7 Conclusion Moral foundations and policy frames are employed as political strategies by politicians to garner support from the public. Politicians carefully word their statements to express their moral and social positions on issues, while maximizing their base’s response to their message. In this paper we present PSL models for the classification of moral foundations expressed in political discourse on the microblog, Twitter. We show the benefits and drawbacks of traditionally used MFD unigrams and domain-specific unigrams for initialization of the models. We also provide an initial approach to the joint modeling of frames and moral foundations. In future works, we will exploit the interesting connections between moral foundations and frames for the analysis of more detailed ideological leanings and stance prediction. Acknowledgments We thank Lyle Ungar for his insight and discussion which inspired this work. We also thank the anonymous reviewers for their thoughtful comments and suggestions. This research was partly funded by a Google Focused Research Award. 729 References Stephen H Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2015. Hinge-loss markov random fields and probabilistic soft logic. arXiv preprint arXiv:1505.04406. Stephen H. Bach, Bert Huang, Ben London, and Lise Getoor. 2013. Hinge-loss Markov random fields: Convex inference for structured prediction. In Proc. of UAI. Akshat Bakliwal, Jennifer Foster, Jennifer van der Puil, Ron O’Brien, Lamia Tounsi, and Mark Hughes. 2013. Sentiment analysis of political tweets: Towards an accurate classifier. In Proc. of ACL. David Bamman and Noah A Smith. 2015. Open extraction of fine-grained political statements. In Proc. of EMNLP. Eric Baumer, Elisha Elovic, Ying Qin, Francesca Polletta, and Geri Gay. 2015. Testing and comparing computational approaches for identifying the language of framing in political news. In In Proc. of NAACL. Adam Bermingham and Alan F Smeaton. 2011. On using twitter to monitor political sentiment and predict election results. Amber Boydstun, Dallas Card, Justin H. Gross, Philip Resnik, and Noah A. Smith. 2014. Tracking the development of media frames within and across policy issues. Lauren M. Burch, Evan L. Frederick, and Ann Pegoraro. 2015. Kissing in the carnage: An examination of framing on twitter during the vancouver riots. Journal of Broadcasting & Electronic Media, 59(3):399–415. Dallas Card, Amber E. Boydstun, Justin H. Gross, Philip Resnik, and Noah A. Smith. 2015. The media frames corpus: Annotations of frames across issues. In Proc. of ACL. Sarah Djemili, Julien Longhi, Claudia Marinica, Dimitris Kotzinos, and Georges-Elia Sarfati. 2014. What does twitter have to say about ideology? In NLP 4 CMC. Dean Fulgoni, Jordan Carpenter, Lyle Ungar, and Daniel Preotiuc-Pietro. 2016. 
An empirical exploration of moral foundations theory in partisan news sources. In Proc. of LREC. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. The paraphrase database. In Proc. of NAACL-HLT. Justin Garten, Reihane Boghrati, Joe Hoover, Kate M Johnson, and Morteza Dehghani. 2016. Morality between the lines: Detecting moral sentiment in text. In IJCAI workshops. Jesse Graham, Jonathan Haidt, and Brian A Nosek. 2009. Liberals and conservatives rely on different sets of moral foundations. Journal of personality and social psychology, 96(5):1029. Jesse Graham, Brian A Nosek, and Jonathan Haidt. 2012. The moral stereotypes of liberals and conservatives: Exaggeration of differences across the political spectrum. PloS one, 7(12):e50092. Jonathan Haidt and Jesse Graham. 2007. When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research, 20(1):98–116. Jonathan Haidt and Craig Joseph. 2004. Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus, 133(4):55–66. Summer Harlow and Thomas Johnson. 2011. The arab spring— overthrowing the protest paradigm? how the new york times, global voices and twitter covered the egyptian revolution. International Journal of Communication, 5(0). Iyyer, Enns, Boyd-Graber, and Resnik. 2014. Political ideology detection using recursive neural networks. In Proc. of ACL. S. Mo Jang and P. Sol Hart. 2015. Polarized frames on ”climate change” and ”global warming” across countries and states: Evidence from twitter big data. Global Environmental Change, 32:11–17. Kristen Johnson and Dan Goldwasser. 2016. “all i know about politics is what i read in twitter”: Weakly supervised models for extracting politicians stances from twitter. In Proceedings of COLING. Kristen Johnson, Di Jin, and Dan Goldwasser. 2017. Leveraging behavioral and social information for weakly supervised collective classification of political discourse on twitter. In Proc. of ACL. I-Ta Lee, Mahak Goindani, Chang Li, Di Jin, Kristen Johnson, Xiao Zhang, Maria Pacheco, and Dan Goldwasser. 2017. Purduenlp at semeval-2017 task 1: Predicting semantic textual similarity with paraphrase and event embeddings. In Proc. of SemEval. Ying Lin, Joe Hoover, Morteza Dehghani, Marlon Mooijman, and Heng Ji. 2017. Acquiring background knowledge to improve moral value prediction. arXiv preprint arXiv:1709.05467. Aibek Makazhanov and Davood Rafiei. 2013. Predicting political preference of twitter users. In Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM ’13, pages 298–305, New York, NY, USA. ACM. Sharon Meraz and Zizi Papacharissi. 2013. Networked gatekeeping and networked framing on #egypt. The International Journal of Press/Politics, 18(2):138– 166. 730 Brendan O’Connor, Ramnath Balasubramanyan, Bryan R Routledge, and Noah A Smith. 2010. From tweets to polls: Linking text sentiment to public opinion time series. In Proc. of ICWSM. James W Pennebaker, Martha E Francis, and Roger J Booth. 2001. Linguistic inquiry and word count: Liwc 2001. Mahway: Lawrence Erlbaum Associates, 71(2001):2001. Ferran Pla and Llu´ıs F Hurtado. 2014. Political tendency identification in twitter using sentiment analysis techniques. In Proc. of COLING. Daniel Preot¸iuc-Pietro, Ye Liu, Daniel Hopkins, and Lyle Ungar. 2017. Beyond binary labels: political ideology prediction of twitter users. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 729–740. Sim, Acree, Gross, and Smith. 2013. Measuring ideological proportions in political speeches. In Proc. of EMNLP. Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: Liwc and computerized text analysis methods. Journal of language and social psychology, 29(1):24–54. Oren Tsur, Dan Calacci, and David Lazer. 2015. A frame of mind: Using statistical models for detection of framing and agenda setting campaigns. In Proc. of ACL. Andranik Tumasjan, Timm Oliver Sprenger, Philipp G Sandner, and Isabell M Welpe. 2010. Predicting elections with twitter: What 140 characters reveal about political sentiment. In ICWSM. Svitlana Volkova, Kyle Shaffer, Jin Yea Jang, and Nathan Hodas. 2017. Separating facts from fiction: Linguistic models to classify suspicious and trusted news posts on twitter. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 647–653.
2018
67
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 731–742 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 731 Coarse-to-Fine Decoding for Neural Semantic Parsing Li Dong and Mirella Lapata Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB [email protected] [email protected] Abstract Semantic parsing aims at mapping natural language utterances into structured meaning representations. In this work, we propose a structure-aware neural architecture which decomposes the semantic parsing process into two stages. Given an input utterance, we first generate a rough sketch of its meaning, where low-level information (such as variable names and arguments) is glossed over. Then, we fill in missing details by taking into account the natural language input and the sketch itself. Experimental results on four datasets characteristic of different domains and meaning representations show that our approach consistently improves performance, achieving competitive results despite the use of relatively simple decoders. 1 Introduction Semantic parsing maps natural language utterances onto machine interpretable meaning representations (e.g., executable queries or logical forms). The successful application of recurrent neural networks to a variety of NLP tasks (Bahdanau et al., 2015; Vinyals et al., 2015) has provided strong impetus to treat semantic parsing as a sequence-to-sequence problem (Jia and Liang, 2016; Dong and Lapata, 2016; Ling et al., 2016). The fact that meaning representations are typically structured objects has prompted efforts to develop neural architectures which explicitly account for their structure. Examples include tree decoders (Dong and Lapata, 2016; Alvarez-Melis and Jaakkola, 2017), decoders constrained by a grammar model (Xiao et al., 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017), or modular decoders which use syntax to dynamically compose various submodels (Rabinovich et al., 2017). In this work, we propose to decompose the decoding process into two stages. The first decoder focuses on predicting a rough sketch of the meaning representation, which omits low-level details, such as arguments and variable names. Example sketches for various meaning representations are shown in Table 1. Then, a second decoder fills in missing details by conditioning on the natural language input and the sketch itself. Specifically, the sketch constrains the generation process and is encoded into vectors to guide decoding. We argue that there are at least three advantages to the proposed approach. Firstly, the decomposition disentangles high-level from low-level semantic information, which enables the decoders to model meaning at different levels of granularity. As shown in Table 1, sketches are more compact and as a result easier to generate compared to decoding the entire meaning structure in one go. Secondly, the model can explicitly share knowledge of coarse structures for the examples that have the same sketch (i.e., basic meaning), even though their actual meaning representations are different (e.g., due to different details). Thirdly, after generating the sketch, the decoder knows what the basic meaning of the utterance looks like, and the model can use it as global context to improve the prediction of the final details. 
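To make the two-stage process concrete, the following minimal Python sketch wires a coarse sketch decoder and a fine decoder together at inference time. It is not the authors' implementation: both decoders are stubbed out with lookup tables (in the paper they are attention-based LSTM decoders), and all names (`decode_sketch`, `decode_meaning`, `parse`) are ours.

```python
# Toy coarse-to-fine pipeline. The two "decoders" are lookup tables so the
# control flow is runnable end to end; in the paper they are LSTM decoders.

SKETCHES = {
    "all flights from dallas before 10am":
        "(lambda#2 (and flight@1 from@2 (< departure_time@1 ?)))",
}

MEANINGS = {
    ("all flights from dallas before 10am",
     "(lambda#2 (and flight@1 from@2 (< departure_time@1 ?)))"):
        "(lambda $0 e (and (flight $0) (from $0 dallas:ci) "
        "(< (departure_time $0) 1000:ti)))",
}


def decode_sketch(utterance):
    """Stage 1: predict a coarse sketch a from the input x, i.e. p(a | x)."""
    return SKETCHES[utterance]


def decode_meaning(utterance, sketch):
    """Stage 2: fill in arguments and variables, i.e. p(y | x, a)."""
    return MEANINGS[(utterance, sketch)]


def parse(utterance):
    sketch = decode_sketch(utterance)           # coarse structure first
    return decode_meaning(utterance, sketch)    # then sketch-guided details


if __name__ == "__main__":
    print(parse("all flights from dallas before 10am"))
```

The only point of the sketch is to show where the factorization of the parsing process into a sketch stage and a detail stage, formalized below, enters the control flow.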
Our framework is flexible and not restricted to specific tasks or any particular model. We conduct experiments on four datasets representative of various semantic parsing tasks ranging from logical form parsing, to code generation, and SQL query generation. We adapt our architecture to these tasks and present several ways to obtain sketches from their respective meaning representations. Experimental results show that our framework achieves competitive performance compared 732 Dataset Length Example GEO 7.6 13.7 6.9 x : which state has the most rivers running through it? y : (argmax $0 (state:t $0) (count $1 (and (river:t $1) (loc:t $1 $0)))) a : (argmax#1 state:t@1 (count#1 (and river:t@1 loc:t@2 ) ) ) ATIS 11.1 21.1 9.2 x : all flights from dallas before 10am y : (lambda $0 e (and (flight $0) (from $0 dallas:ci) (< (departure time $0) 1000:ti))) a : (lambda#2 (and flight@1 from@2 (< departure time@1 ? ) ) ) DJANGO 14.4 8.7 8.0 x : if length of bits is lesser than integer 3 or second element of bits is not equal to string ’as’ , y : if len(bits) < 3 or bits[1] != ’as’: a : if len ( NAME ) < NUMBER or NAME [ NUMBER ] != STRING : WIKISQL 17.9 13.3 13.0 2.7 Table schema: ∥Pianist∥Conductor∥Record Company∥Year of Recording∥Format∥ x : What record company did conductor Mikhail Snitko record for after 1996? y : SELECT Record Company WHERE (Year of Recording > 1996) AND (Conductor = Mikhail Snitko) a : WHERE > AND = Table 1: Examples of natural language expressions x, their meaning representations y, and meaning sketches a. The average number of tokens is shown in the second column. with previous systems, despite employing relatively simple sequence decoders. 2 Related Work Various models have been proposed over the years to learn semantic parsers from natural language expressions paired with their meaning representations (Tang and Mooney, 2000; Ge and Mooney, 2005; Zettlemoyer and Collins, 2007; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2011; Andreas et al., 2013; Zhao and Huang, 2015). These systems typically learn lexicalized mapping rules and scoring models to construct a meaning representation for a given input. More recently, neural sequence-to-sequence models have been applied to semantic parsing with promising results (Dong and Lapata, 2016; Jia and Liang, 2016; Ling et al., 2016), eschewing the need for extensive feature engineering. Several ideas have been explored to enhance the performance of these models such as data augmentation (Koˇcisk´y et al., 2016; Jia and Liang, 2016), transfer learning (Fan et al., 2017), sharing parameters for multiple languages or meaning representations (Susanto and Lu, 2017; Herzig and Berant, 2017), and utilizing user feedback signals (Iyer et al., 2017). There are also efforts to develop structured decoders that make use of the syntax of meaning representations. Dong and Lapata (2016) and Alvarez-Melis and Jaakkola (2017) develop models which generate tree structures in a topdown fashion. Xiao et al. (2016) and Krishnamurthy et al. (2017) employ the grammar to constrain the decoding process. Cheng et al. (2017) use a transition system to generate variable-free queries. Yin and Neubig (2017) design a grammar model for the generation of abstract syntax trees (Aho et al., 2007) in depth-first, left-to-right order. Rabinovich et al. (2017) propose a modular decoder whose submodels are dynamically composed according to the generated tree structure. Our own work also aims to model the structure of meaning representations more faithfully. 
The flexibility of our approach enables us to easily apply sketches to different types of meaning representations, e.g., trees or other structured objects. Coarse-to-fine methods have been popular in the NLP literature, and are perhaps best known for syntactic parsing (Charniak et al., 2006; Petrov, 2011). Artzi and Zettlemoyer (2013) and Zhang et al. (2017) use coarse lexical entries or macro grammars to reduce the search space of semantic parsers. Compared with coarse-to-fine inference for lexical induction, sketches in our case are abstractions of the final meaning representation. The idea of using sketches as intermediate representations has also been explored in the field of program synthesis (Solar-Lezama, 2008; Zhang and Sun, 2013; Feng et al., 2017). Yaghmazadeh et al. (2017) use SEMPRE (Berant et al., 2013) to map a sentence into SQL sketches which are completed using program synthesis techniques and iteratively repaired if they are faulty. 3 Problem Formulation Our goal is to learn semantic parsers from instances of natural language expressions paired with their structured meaning representations. 733 all flights before ti0 ࢋଵ ࢋଶ ࢋସ ࢋଷ ࢊଷ (and flight@1 ࢊସ flight@1 (< ࢊହ (< departure _time@1 ࢊ଺ departure _time@1 ? ࢊ଻ ? ) ࢊଵ <s> (lambda#2 ࢊଶ (lambda#2 (and ࢊ଼ ) ) ࢊଽ ) ) ࢎଵ <s> (lambda ࢎଶ $0 ࢜ଷ ࢜ସ ࢜ହ ࢜଺ ࢜଻ ࢜ଵ ࢜ଶ ଼࢜ ࢜ଽ ࢊଵ଴ ) </s> ࢎହ (flight ࢎ଺ $0 ࢎ଻ $0 ) ࢎ଼ ) (< ࢎଽ (departure _time ࢎଵ଴ $0 ࢎଵଵ $0 ) ࢎଵଶ ) ti0 ࢎଵଷ ti0 ) ࢎଵସ ) ݌(ܽ|ݔ) ࢎଷ $0 e ࢎସ e (and ࢎଵହ ) ࢎଵ଺ </s> Sketch-Guided Output Decoding Sketch Encoding Sketch Decoding Input Encoding ݌(ݕ|ݔ, ܽ) Encoder units Decoder units Figure 1: We first generate the meaning sketch a for natural language input x. Then, a fine meaning decoder fills in the missing details (shown in red) of meaning representation y. The coarse structure a is used to guide and constrain the output decoding. Let x = x1 · · · x|x| denote a natural language expression, and y = y1 · · · y|y| its meaning representation. We wish to estimate p (y|x), the conditional probability of meaning representation y given input x. We decompose p (y|x) into a twostage generation process: p (y|x) = p (y|x, a) p (a|x) (1) where a = a1 · · · a|a| is an abstract sketch representing the meaning of y. We defer detailed description of how sketches are extracted to Section 4. Suffice it to say that the extraction amounts to stripping off arguments and variable names in logical forms, schema specific information in SQL queries, and substituting tokens with types in source code (see Table 1). As shown in Figure 1, we first predict sketch a for input x, and then fill in missing details to generate the final meaning representation y by conditioning on both x and a. The sketch is encoded into vectors which in turn guide and constrain the decoding of y. We view the input expression x, the meaning representation y, and its sketch a as sequences. The generation probabilities are factorized as: p (a|x) = |a|  t=1 p (at|a<t, x) (2) p (y|x, a) = |y|  t=1 p (yt|y<t, x, a) (3) where a<t = a1 · · · at−1, and y<t = y1 · · · yt−1. In the following, we will explain how p (a|x) and p (y|x, a) are estimated. 3.1 Sketch Generation An encoder is used to encode the natural language input x into vector representations. Then, a decoder learns to compute p (a|x) and generate the sketch a conditioned on the encoding vectors. Input Encoder Every input word is mapped to a vector via xt = Wxo (xt), where Wx ∈ Rn×|Vx| is an embedding matrix, |Vx| is the vocabulary size, and o (xt) a one-hot vector. 
We use a bi-directional recurrent neural network with long short-term memory units (LSTM, Hochreiter and Schmidhuber 1997) as the input encoder. The encoder recursively computes the hidden vectors at the t-th time step via: −→e t = fLSTM −→e t−1, xt  , t = 1, · · · , |x| (4) ←−e t = fLSTM ←−e t+1, xt  , t = |x|, · · · , 1 (5) et = [−→e t, ←−e t] (6) where [·, ·] denotes vector concatenation, et ∈Rn, and fLSTM is the LSTM function. Coarse Meaning Decoder The decoder’s hidden vector at the t-th time step is computed by dt = fLSTM (dt−1, at−1), where at−1 ∈Rn is the embedding of the previously predicted token. The hidden states of the first time step in the decoder are initialized by the concatenated encoding vectors d0 = [−→e |x|, ←−e 1]. Additionally, we use an attention mechanism (Luong et al., 2015) to learn soft alignments. We compute the attention score for the current time step t of the decoder, with the k-th hidden state in the encoder as: st,k = exp{dt · ek}/Zt (7) 734 where Zt = |x| j=1 exp{dt · ej} is a normalization term. Then we compute p (at|a<t, x) via: ed t = |x|  k=1 st,kek (8) datt t = tanh  W1dt + W2ed t  (9) p (at|a<t, x) = softmaxat  Wodatt t + bo  (10) where W1, W2 ∈Rn×n, Wo ∈R|Va|×n, and bo ∈R|Va| are parameters. Generation terminates once an end-of-sequence token “</s>” is emitted. 3.2 Meaning Representation Generation Meaning representations are predicted by conditioning on the input x and the generated sketch a. The model uses the encoder-decoder architecture to compute p (y|x, a), and decorates the sketch a with details to generate the final output. Sketch Encoder As shown in Figure 1, a bidirectional LSTM encoder maps the sketch sequence a into vectors {vk}|a| k=1 as in Equation (6), where vk denotes the vector of the k-th time step. Fine Meaning Decoder The final decoder is based on recurrent neural networks with an attention mechanism, and shares the input encoder described in Section 3.1. The decoder’s hidden states {ht}|y| t=1 are computed via: it = vk yt−1 is determined by ak yt−1 otherwise (11) ht = fLSTM (ht−1, it) where h0 = [−→e |x|, ←−e 1], and yt−1 is the embedding of the previously predicted token. Apart from using the embeddings of previous tokens, the decoder is also fed with {vk}|a| k=1. If yt−1 is determined by ak in the sketch (i.e., there is a one-toone alignment between yt−1 and ak), we use the corresponding token’s vector vk as input to the next time step. The sketch constrains the decoding output. If the output token yt is already in the sketch, we force yt to conform to the sketch. In some cases, sketch tokens will indicate what information is missing (e.g., in Figure 1, token “flight@1” indicates that an argument is missing for the predicate “flight”). In other cases, sketch tokens will not reveal the number of missing tokens (e.g., “STRING” in DJANGO) but the decoder’s output will indicate whether missing details have been generated (e.g., if the decoder emits a closing quote token for “STRING”). Moreover, type information in sketches can be used to constrain generation. In Table 1, sketch token “NUMBER” specifies that a numeric token should be emitted. For the missing details, we use the hidden vector ht to compute p (yt|y<t, x, a), analogously to Equations (7)–(10). 3.3 Training and Inference The model’s training objective is to maximize the log likelihood of the generated meaning representations given natural language expressions: max  (x,a,y)∈D log p (y|x, a) + log p (a|x) where D represents training pairs. 
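In code, this objective is simply the sum of two sequence-level log-likelihoods per training pair. The numpy sketch below (an illustration with our own naming, not the released implementation) computes the per-example negative objective from the softmax distributions produced by the coarse and fine decoders.

```python
import numpy as np


def sequence_log_likelihood(step_distributions, target_ids):
    """Sum_t log p(token_t | history), given one softmax vector per time step."""
    return float(sum(np.log(dist[tok])
                     for dist, tok in zip(step_distributions, target_ids)))


def coarse_to_fine_loss(sketch_dists, sketch_ids, fine_dists, fine_ids):
    """Negative of log p(a | x) + log p(y | x, a) for a single (x, a, y) example."""
    return -(sequence_log_likelihood(sketch_dists, sketch_ids)
             + sequence_log_likelihood(fine_dists, fine_ids))


if __name__ == "__main__":
    # Toy check: a 4-token vocabulary and two-step gold sequences.
    dists = [np.array([0.7, 0.1, 0.1, 0.1]), np.array([0.2, 0.6, 0.1, 0.1])]
    print(coarse_to_fine_loss(dists, [0, 1], dists, [0, 1]))
```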
At test time, the prediction for input x is obtained via ˆa = arg maxa′ p (a′|x) and ˆy = arg maxy′ p (y′|x, ˆa), where a′ and y′ represent coarse- and fine-grained meaning candidates. Because probabilities p (a|x) and p (y|x, a) are factorized as shown in Equations (2)–(3), we can obtain best results approximately by using greedy search to generate tokens one by one, rather than iterating over all candidates. 4 Semantic Parsing Tasks In order to show that our framework applies across domains and meaning representations, we developed models for three tasks, namely parsing natural language to logical form, to Python source code, and to SQL query. For each of these tasks we describe the datasets we used, how sketches were extracted, and specify model details over and above the architecture presented in Section 3. 4.1 Natural Language to Logical Form For our first task we used two benchmark datasets, namely GEO (880 language queries to a database of U.S. geography) and ATIS (5, 410 queries to a flight booking system). Examples are shown in Table 1 (see the first and second block). We used standard splits for both datasets: 600 training and 280 test instances for GEO (Zettlemoyer and Collins, 2005); 4, 480 training, 480 development, and 450 test examples for ATIS. Meaning representations in these datasets are based on λ-calculus (Kwiatkowski et al., 2011). We use brackets to linearize the hierarchical structure. 735 Algorithm 1 Sketch for GEO and ATIS Input: t: Tree-structure λ-calculus expression t.pred: Predicate name, or operator name Output: a: Meaning sketch ▷(count $0 (< (fare $0) 50:do))→(count#1 (< fare@1 ?)) function SKETCH(t) if t is leaf then ▷No nonterminal in arguments return “%s@%d” % (t.pred, len(t.args)) if t.pred is λ operator, or quantifier then ▷e.g., count Omit variable information defined by t.pred t.pred ←“%s#%d” % (t.pred, len(variable)) for c ←argument in t.args do if c is nonterminal then c ←SKETCH(c) else c ←“?” ▷Placeholder for terminal return t The first element between a pair of brackets is an operator or predicate name, and any remaining elements are its arguments. Algorithm 1 shows the pseudocode used to extract sketches from λ-calculus-based meaning representations. We strip off arguments and variable names in logical forms, while keeping predicates, operators, and composition information. We use the symbol “@” to denote the number of missing arguments in a predicate. For example, we extract “from@2” from the expression “(from $0 dallas:ci)” which indicates that the predicate “from” has two arguments. We use “?” as a placeholder in cases where only partial argument information can be omitted. We also omit variable information defined by the lambda operator and quantifiers (e.g., exists, count, and argmax). We use the symbol “#” to denote the number of omitted tokens. For the example in Figure 1, “lambda $0 e” is reduced to “lambda#2”. The meaning representations of these two datasets are highly compositional, which motivates us to utilize the hierarchical structure of λ-calculus. A similar idea is also explored in the tree decoders proposed in Dong and Lapata (2016) and Yin and Neubig (2017) where parent hidden states are fed to the input gate of the LSTM units. On the contrary, parent hidden states serve as input to the softmax classifiers of both fine and coarse meaning decoders. Parent Feeding Taking the meaning sketch “(and flight@1 from@2)” as an example, the parent of “from@2” is “(and”. Let pt denote the parent of the t-th time step in the decoder. 
Compared with Equation (10), we use the vector datt t and the hidden state of its parent dpt to compute the probability p (at|a<t, x) via: p (at|a<t, x) = softmaxat  Wo[datt t , dpt] + bo  where [·, ·] denotes vector concatenation. The parent feeding is used for both decoding stages. 4.2 Natural Language to Source Code Our second semantic parsing task used DJANGO (Oda et al., 2015), a dataset built upon the Python code of the Django library. The dataset contains lines of code paired with natural language expressions (see the third block in Table 1) and exhibits a variety of use cases, such as iteration, exception handling, and string manipulation. The original split has 16, 000 training, 1, 000 development, and 1, 805 test instances. We used the built-in lexical scanner of Python1 to tokenize the code and obtain token types. Sketches were extracted by substituting the original tokens with their token types, except delimiters (e.g., “[”, and “:”), operators (e.g., “+”, and “*”), and built-in keywords (e.g., “True”, and “while”). For instance, the expression “if s[:4].lower() == ’http’:” becomes “if NAME [ : NUMBER ] . NAME ( ) == STRING :”, with details about names, values, and strings being omitted. DJANGO is a diverse dataset, spanning various real-world use cases and as a result models are often faced with out-of-vocabulary (OOV) tokens (e.g., variable names, and numbers) that are unseen during training. We handle OOV tokens with a copying mechanism (Gu et al., 2016; Gulcehre et al., 2016; Jia and Liang, 2016), which allows the fine meaning decoder (Section 3.2) to directly copy tokens from the natural language input. Copying Mechanism Recall that we use a softmax classifier to predict the probability distribution p (yt|y<t, x, a) over the pre-defined vocabulary. We also learn a copying gate gt ∈[0, 1] to decide whether yt should be copied from the input or generated from the vocabulary. We compute the modified output distribution via: gt = sigmoid(wg · ht + bg) ˜p (yt|y<t, x, a) = (1 −gt)p (yt|y<t, x, a) + 1[yt /∈Vy]gt  k:xk=yt st,k 1https://docs.python.org/3/library/ tokenize 736 where wg ∈Rn and bg ∈R are parameters, and the indicator function 1[yt /∈Vy] is 1 only if yt is not in the target vocabulary Vy; the attention score st,k (see Equation (7)) measures how likely it is to copy yt from the input word xk. 4.3 Natural Language to SQL The WIKISQL (Zhong et al., 2017) dataset contains 80, 654 examples of questions and SQL queries distributed across 24, 241 tables from Wikipedia. The goal is to generate the correct SQL query for a natural language question and table schema (i.e., table column names), without using the content values of tables (see the last block in Table 1 for an example). The dataset is partitioned into a training set (70%), a development set (10%), and a test set (20%). Each table is present in one split to ensure generalization to unseen tables. WIKISQL queries follow the format “SELECT agg op agg col WHERE (cond col cond op cond) AND ...”, which is a subset of the SQL syntax. SELECT identifies the column that is to be included in the results after applying the aggregation operator agg op2 to column agg col. WHERE can have zero or multiple conditions, which means that column cond col must satisfy the constraints expressed by the operator cond op3 and the condition value cond. Sketches for SQL queries are simply the (sorted) sequences of condition operators cond op in WHERE clauses. 
For example, in Table 1, sketch “WHERE > AND =” has two condition operators, namely “>” and “=”. The generation of SQL queries differs from our previous semantic parsing tasks, in that the table schema serves as input in addition to natural language. We therefore modify our input encoder in order to render it table-aware, so to speak. Furthermore, due to the formulaic nature of the SQL query, we only use our decoder to generate the WHERE clause (with the help of sketches). The SELECT clause has a fixed number of slots (i.e., aggregation operator agg op and column agg col), which we straightforwardly predict with softmax classifiers (conditioned on the input). We briefly explain how these components are modeled below. Table-Aware Input Encoder Given a table schema with M columns, we employ the special token “∥” to concatenate its header names 2agg op ∈{empty, COUNT, MIN, MAX, SUM, AVG}. 3cond op ∈{=, <, >}. || of || || college number presidents Column 1 Column 2 ࢉଵ ࢉଶ ݔଶ ݔଷ ݔସ ࢋଵ ࢋଶ ࢋସ ࢋଷ ݔଵ Input Question Question-to-Table Attention ࢉଵ ࢋ ࢉଶ ࢋ ࢉସ ࢋ ࢉଷ ࢋ ̃݁ଵ ̃݁ଶ ̃݁ସ ̃݁ଷ LSTM units Vectors Attention Figure 2: Table-aware input encoder (left) and table column encoder (right) used for WIKISQL. as “∥c1,1 · · · c1,|c1|∥· · · ∥cM,1 · · · cM,|cM|∥”, where the k-th column (“ck,1 · · · ck,|ck|”) has |ck| words. As shown in Figure 2, we use bi-directional LSTMs to encode the whole sequence. Next, for column ck, the LSTM hidden states at positions ck,1 and ck,|ck| are concatenated. Finally, the concatenated vectors are used as the encoding vectors {ck}M k=1 for table columns. As mentioned earlier, the meaning representations of questions are dependent on the tables. As shown in Figure 2, we encode the input question x into {et}|x| t=1 using LSTM units. At each time step t, we use an attention mechanism towards table column vectors {ck}M k=1 to obtain the most relevant columns for et. The attention score from et to ck is computed via ut,k ∝exp{α(et) · α(ck)}, where α(·) is a one-layer neural network, and M k=1 ut,k = 1. Then we compute the context vector ce t = M k=1 ut,kck to summarize the relevant columns for et. We feed the concatenated vectors {[et, ce t]}|x| t=1 into a bi-directional LSTM encoder, and use the new encoding vectors {˜et}|x| t=1 to replace {et}|x| t=1 in other model components. We define the vector representation of input x as: ˜e = [−→˜e |x|, ←−˜e 1] (12) analogously to Equations (4)–(6). SELECT Clause We feed the question vector ˜e into a softmax classifier to obtain the aggregation operator agg op. If agg col is the k-th table column, its probability is computed via: σ(x) = w3 · tanh (W4x + b4) (13) p (agg col = k|x) ∝exp{σ([˜e, ck])} (14) where M j=1 p (agg col = j|x) = 1, σ(·) is a scoring network, and W4 ∈R2n×m, w3, b4 ∈ Rm are parameters. 737 ̃݁ WHERE < AND = ࢜ଵ ࢜ଶ ࢎଵ Column 4 ࢎସ ࢎଶ ࢎଷ AND cond_col Pointer ࢉସ cond Pointer ݔଶ ݔହ ̃݁ଶ ̃݁ହ … … Sketch-Guided WHERE Decoding Sketch Encoding Sketch Classification Figure 3: Fine meaning decoder of the WHERE clause used for WIKISQL. WHERE Clause We first generate sketches whose details are subsequently decorated by the fine meaning decoder described in Section 3.2. As the number of sketches in the training set is small (35 in total), we model sketch generation as a classification problem. We treat each sketch a as a category, and use a softmax classifier to compute p (a|x): p (a|x) = softmaxa (Wa˜e + ba) where Wa ∈R|Va|×n, ba ∈R|Va| are parameters, and ˜e is the table-aware input representation defined in Equation (12). 
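For concreteness, the following numpy sketch mirrors Equations (13)-(14): a one-layer scoring network sigma applied to the concatenation of the table-aware question vector and each column vector, followed by a softmax over the table's columns. The toy dimensions and randomly initialised parameters are placeholders for learned values, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, num_columns = 4, 3, 5            # toy sizes only

W4 = rng.normal(size=(m, 2 * n))       # scoring-network parameters (random stand-ins)
b4 = rng.normal(size=m)
w3 = rng.normal(size=m)


def sigma(x):
    """sigma(x) = w3 . tanh(W4 x + b4), the one-layer scoring network."""
    return float(w3 @ np.tanh(W4 @ x + b4))


e_tilde = rng.normal(size=n)                 # table-aware question vector
columns = rng.normal(size=(num_columns, n))  # encoded column vectors c_k

logits = np.array([sigma(np.concatenate([e_tilde, c])) for c in columns])
p_agg_col = np.exp(logits - logits.max())
p_agg_col /= p_agg_col.sum()                 # p(agg_col = k | x) over the columns

print("column distribution:", np.round(p_agg_col, 3), "argmax:", int(p_agg_col.argmax()))
```

The same form of scoring network, with separate parameters, reappears below for choosing condition columns and copy spans in the WHERE clause.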
Once the sketch is predicted, we know the condition operators and number of conditions in the WHERE clause which follows the format “WHERE (cond op cond col cond) AND ...”. As shown in Figure 3, our generation task now amounts to populating the sketch with condition columns cond col and their values cond. Let {ht}|y| t=1 denote the LSTM hidden states of the fine meaning decoder, and {hatt t }|y| t=1 the vectors obtained by the attention mechanism as in Equation (9). The condition column cond colyt is selected from the table’s headers. For the k-th column in the table, we compute p (cond colyt = k|y<t, x, a) as in Equation (14), but use different parameters and compute the score via σ([hatt t , ck]). If the k-th table column is selected, we use ck for the input of the next LSTM unit in the decoder. Condition values are typically mentioned in the input questions. These values are often phrases with multiple tokens (e.g., Mikhail Snitko in Table 1). We therefore propose to select a text span from input x for each condition value condyt rather than copying tokens one by one. Let xl · · · xr denote the text span from which condyt is copied. We factorize its probability as: p (condyt = xl · · · xr|y<t, x, a) = p  lL yt|y<t, x, a  p  rR yt|y<t, x, a, lL yt  p  lL yt|y<t, x, a  ∝exp{σ([hatt t , ˜el])} p  rR yt|y<t, x, a, lL yt  ∝exp{σ([hatt t , ˜el, ˜er])} where lL yt/rR yt represents the first/last copying index of condyt is l/r, the probabilities are normalized to 1, and σ(·) is the scoring network defined in Equation (13). Notice that we use different parameters for the scoring networks σ(·). The copied span is represented by the concatenated vector [˜el, ˜er], which is fed into a one-layer neural network and then used as the input to the next LSTM unit in the decoder. 5 Experiments We present results on the three semantic parsing tasks discussed in Section 4. Our implementation and pretrained models are available at https:// github.com/donglixp/coarse2fine. 5.1 Experimental Setup Preprocessing For GEO and ATIS, we used the preprocessed versions provided by Dong and Lapata (2016), where natural language expressions are lowercased and stemmed with NLTK (Bird et al., 2009), and entity mentions are replaced by numbered markers. We combined predicates and left brackets that indicate hierarchical structures to make meaning representations compact. We employed the preprocessed DJANGO data provided by Yin and Neubig (2017), where input expressions are tokenized by NLTK, and quoted strings in the input are replaced with place holders. WIKISQL was preprocessed by the script provided by Zhong et al. (2017), where inputs were lowercased and tokenized by Stanford CoreNLP (Manning et al., 2014). Configuration Model hyperparameters were cross-validated on the training set for GEO, and were validated on the development split for the other datasets. Dimensions of hidden vectors and word embeddings were selected from {250, 300} and {150, 200, 250, 300}, respectively. The dropout rate was selected from {0.3, 0.5}. Label smoothing (Szegedy et al., 2016) was employed for GEO and ATIS. The smoothing parameter was set to 0.1. 
For WIKISQL, the hidden size of σ(·) 738 Method GEO ATIS ZC07 (Zettlemoyer and Collins, 2007) 86.1 84.6 UBL (Kwiatkowksi et al., 2010) 87.9 71.4 FUBL (Kwiatkowski et al., 2011) 88.6 82.8 GUSP++ (Poon, 2013) — 83.5 KCAZ13 (Kwiatkowski et al., 2013) 89.0 — DCS+L (Liang et al., 2013) 87.9 — TISP (Zhao and Huang, 2015) 88.9 84.2 SEQ2SEQ (Dong and Lapata, 2016) 84.6 84.2 SEQ2TREE (Dong and Lapata, 2016) 87.1 84.6 ASN (Rabinovich et al., 2017) 85.7 85.3 ASN+SUPATT (Rabinovich et al., 2017) 87.1 85.9 ONESTAGE 85.0 85.3 COARSE2FINE 88.2 87.7 −sketch encoder 87.1 86.9 + oracle sketch 93.9 95.1 Table 2: Accuracies on GEO and ATIS. and α(·) in Equation (13) was set to 64. Word embeddings were initialized by GloVe (Pennington et al., 2014), and were shared by table encoder and input encoder in Section 4.3. We appended 10-dimensional part-of-speech tag vectors to embeddings of the question words in WIKISQL. The part-of-speech tags were obtained by the spaCy toolkit. We used the RMSProp optimizer (Tieleman and Hinton, 2012) to train the models. The learning rate was selected from {0.002, 0.005}. The batch size was 200 for WIKISQL, and was 64 for other datasets. Early stopping was used to determine the number of epochs. Evaluation We use accuracy as the evaluation metric, i.e., the percentage of the examples that are correctly parsed to their gold standard meaning representations. For WIKISQL, we also execute generated SQL queries on their corresponding tables, and report the execution accuracy which is defined as the proportion of correct answers. 5.2 Results and Analysis We compare our model (COARSE2FINE) against several previously published systems as well as various baselines. Specifically, we report results with a model which decodes meaning representations in one stage (ONESTAGE) without leveraging sketches. We also report the results of several ablation models, i.e., without a sketch encoder and without a table-aware input encoder. Table 2 presents our results on GEO and ATIS. Overall, we observe that COARSE2FINE outperforms ONESTAGE, which suggests that disentangling high-level from low-level information durMethod Accuracy Retrieval System 14.7 Phrasal SMT 31.5 Hierarchical SMT 9.5 SEQ2SEQ+UNK replacement 45.1 SEQ2TREE+UNK replacement 39.4 LPN+COPY (Ling et al., 2016) 62.3 SNM+COPY (Yin and Neubig, 2017) 71.6 ONESTAGE 69.5 COARSE2FINE 74.1 −sketch encoder 72.1 + oracle sketch 83.0 Table 3: DJANGO results. Accuracies in the first and second block are taken from Ling et al. (2016) and Yin and Neubig (2017). ing decoding is beneficial. The results also show that removing the sketch encoder harms performance since the decoder loses access to additional contextual information. Compared with previous neural models that utilize syntax or grammatical information (SEQ2TREE, ASN; the second block in Table 2), our method performs competitively despite the use of relatively simple decoders. As an upper bound, we report model accuracy when gold meaning sketches are given to the fine meaning decoder (+oracle sketch). As can be seen, predicting the sketch correctly boosts performance. The oracle results also indicate the accuracy of the fine meaning decoder. Table 3 reports results on DJANGO where we observe similar tendencies. COARSE2FINE outperforms ONESTAGE by a wide margin. It is also superior to the best reported result in the literature (SNM+COPY; see the second block in the table). 
Again we observe that the sketch encoder is beneficial and that there is an 8.9 point difference in accuracy between COARSE2FINE and the oracle. Results on WIKISQL are shown in Table 4. Our model is superior to ONESTAGE as well as to previous best performing systems. COARSE2FINE’s accuracies on aggregation agg op and agg col are 90.2% and 92.0%, respectively, which is comparable to SQLNET (Xu et al., 2017). So the most gain is obtained by the improved decoder of the WHERE clause. We also find that a tableaware input encoder is critical for doing well on this task, since the same question might lead to different SQL queries depending on the table schemas. Consider the question “how many presidents are graduated from A”. The SQL query over table “∥President∥College∥” is “SELECT 739 Method Accuracy Execution Accuracy SEQ2SEQ 23.4 35.9 Aug Ptr Network 43.3 53.3 SEQ2SQL (Zhong et al., 2017) 48.3 59.4 SQLNET (Xu et al., 2017) 61.3 68.0 ONESTAGE 68.8 75.9 COARSE2FINE 71.7 78.5 −sketch encoder 70.8 77.7 −table-aware input encoder 68.6 75.6 + oracle sketch 73.0 79.6 Table 4: Evaluation results on WIKISQL. Accuracies in the first block are taken from Zhong et al. (2017) and Xu et al. (2017). Method GEO ATIS DJANGO WIKISQL ONESTAGE 85.4 85.9 73.2 95.4 COARSE2FINE 89.3 88.0 77.4 95.9 Table 5: Sketch accuracy. For ONESTAGE, sketches are extracted from the meaning representations it generates. COUNT(President) WHERE (College = A)”, but the query over table “∥College∥Number of Presidents∥” would be “SELECT Number of Presidents WHERE (College = A)”. We also examine the predicted sketches themselves in Table 5. We compare sketches generated by COARSE2FINE against ONESTAGE. The latter model generates meaning representations without an intermediate sketch generation stage. Nevertheless, we can extract sketches from the output of ONESTAGE following the procedures described in Section 4. Sketches produced by COARSE2FINE are more accurate across the board. This is not surprising because our model is trained explicitly to generate compact meaning sketches. Taken together (Tables 2–4), our results show that better sketches bring accuracy gains on GEO, ATIS, and DJANGO. On WIKISQL, the sketches predicted by COARSE2FINE are marginally better compared with ONESTAGE. Performance improvements on this task are mainly due to the fine meaning decoder. We conjecture that by decomposing decoding into two stages, COARSE2FINE can better match table columns and extract condition values without interference from the prediction of condition operators. Moreover, the sketch provides a canonical order of condition operators, which is beneficial for the decoding process (Vinyals et al., 2016; Xu et al., 2017). 6 Conclusions In this paper we presented a coarse-to-fine decoding framework for neural semantic parsing. We first generate meaning sketches which abstract away from low-level information such as arguments and variable names and then predict missing details in order to obtain full meaning representations. The proposed framework can be easily adapted to different domains and meaning representations. Experimental results show that coarseto-fine decoding improves performance across tasks. In the future, we would like to apply the framework in a weakly supervised setting, i.e., to learn semantic parsers from question-answer pairs and to explore alternative ways of defining meaning sketches. Acknowledgments We would like to thank Pengcheng Yin for sharing with us the preprocessed version of the DJANGO dataset. 
We gratefully acknowledge the financial support of the European Research Council (award number 681760; Dong, Lapata) and the AdeptMind Scholar Fellowship program (Dong). References Alfred V Aho, Ravi Sethi, and Jeffrey D Ullman. 2007. Compilers: principles, techniques, and tools, volume 2. Addison-wesley Reading. David Alvarez-Melis and Tommi S Jaakkola. 2017. Tree-structured decoding with doubly-recurrent neural networks. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France. Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 47–52, Sofia, Bulgaria. Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association of Computational Linguistics, 1:49–62. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, California. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 740 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544, Seattle, Washington. Association for Computational Linguistics. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O’Reilly Media. Eugene Charniak, Mark Johnson, Micha Elsner, Joseph Austerweil, David Ellis, Isaac Haxton, Catherine Hill, R. Shrivaths, Jeremy Moore, Michael Pozar, and Theresa Vu. 2006. Multilevel coarse-to-fine PCFG parsing. In Proceedings of the Human Language Technology Conference of the NAACL, pages 168–175, New York, NY. Jianpeng Cheng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. 2017. Learning structured natural language representations for semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 44–55. Association for Computational Linguistics. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 33–43, Berlin, Germany. Xing Fan, Emilio Monti, Lambert Mathias, and Markus Dreyer. 2017. Transfer learning for neural semantic parsing. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 48–56, Vancouver, Canada. Yu Feng, Ruben Martins, Yuepeng Wang, Isil Dillig, and Thomas Reps. 2017. Component-based synthesis for complex apis. In Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, POPL 2017, pages 599–612, New York, NY. Ruifang Ge and Raymond J. Mooney. 2005. A statistical semantic parser that integrates syntax and semantics. In Proceedings of the Ninth Conference on Computational Natural Language Learning, pages 9–16, Ann Arbor, Michigan. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1631–1640, Berlin, Germany. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 140–149, Berlin, Germany. Association for Computational Linguistics. Jonathan Herzig and Jonathan Berant. 2017. Neural semantic parsing over multiple knowledge-bases. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 623– 628, Vancouver, Canada. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9:1735– 1780. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 963–973, Vancouver, Canada. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 12–22, Berlin, Germany. Tom´aˇs Koˇcisk´y, G´abor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised sequential autoencoders. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1078– 1087, Austin, Texas. Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1517–1527, Copenhagen, Denmark. Tom Kwiatkowksi, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higherorder unification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1223–1233, Cambridge, MA. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1545–1556, Seattle, Washington. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1512–1523, Edinburgh, Scotland. Percy Liang, Michael I. Jordan, and Dan Klein. 2013. Learning dependency-based compositional semantics. Computational Linguistics, 39(2). Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 599–609, Berlin, Germany. 741 Wei Lu, Hwee Tou Ng, Wee Sun Lee, and Luke Zettlemoyer. 2008. A generative model for parsing natural language to meaning representations. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 783–792, Honolulu, Hawaii. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. 
In Association for Computational Linguistics System Demonstrations, pages 55–60, Baltimore, Maryland. Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Learning to generate pseudo-code from source code using statistical machine translation. In Proceedings of the 2015 30th IEEE/ACM International Conference on Automated Software Engineering, pages 574–584, Washington, DC. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543, Doha, Qatar. Slav Petrov. 2011. Coarse-to-fine natural language processing. Springer Science & Business Media. Hoifung Poon. 2013. Grounded unsupervised semantic parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 933–943, Sofia, Bulgaria. Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code generation and semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1139–1149, Vancouver, Canada. Armando Solar-Lezama. 2008. Program Synthesis by Sketching. Ph.D. thesis, University of California at Berkeley, Berkeley, CA. Raymond Hendy Susanto and Wei Lu. 2017. Neural architectures for multilingual semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 38–44, Vancouver, Canada. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826. Lappoon R. Tang and Raymond J. Mooney. 2000. Automated construction of database interfaces: Intergrating statistical and relational learning for semantic parsing. In 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 133–141, Hong Kong, China. T. Tieleman and G. Hinton. 2012. Lecture 6.5— RMSProp: Divide the gradient by a running average of its recent magnitude. Technical report. Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. 2016. Order matters: Sequence to sequence for sets. In Proceedings of the 4th International Conference on Learning Representations, San Juan, Puerto Rico. Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Proceedings of the 28th International Conference on Neural Information Processing Systems, pages 2773–2781, Montreal, Canada. Yuk Wah Wong and Raymond Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 960–967, Prague, Czech Republic. Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1341–1350, Berlin, Germany. Xiaojun Xu, Chang Liu, and Dawn Song. 2017. SQLNet: Generating structured queries from natural language without reinforcement learning. arXiv preprint arXiv:1711.04436. Navid Yaghmazadeh, Yuepeng Wang, Isil Dillig, and Thomas Dillig. 2017. SQLizer: Query synthesis from natural language. Proceedings of the ACM on Programming Languages, 1:63:1–63:26. 
Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 440– 450, Vancouver, Canada. Luke Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence, pages 658–666, Edinburgh, Scotland. Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 678–687, Prague, Czech Republic. 742 Sai Zhang and Yuyin Sun. 2013. Automatically synthesizing SQL queries from input-output examples. In Proceedings of the 28th IEEE/ACM International Conference on Automated Software Engineering, pages 224–234, Piscataway, NJ. Yuchen Zhang, Panupong Pasupat, and Percy Liang. 2017. Macro grammars and holistic triggering for efficient semantic parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1214–1223. Association for Computational Linguistics. Kai Zhao and Liang Huang. 2015. Type-driven incremental semantic parsing with polymorphism. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1416–1421, Denver, Colorado. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.
2018
68
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 743–753 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 743 Confidence Modeling for Neural Semantic Parsing Li Dong†∗and Chris Quirk‡ and Mirella Lapata† † School of Informatics, University of Edinburgh ‡ Microsoft Research, Redmond [email protected] [email protected] [email protected] Abstract In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions allowing users to interpret their model, and verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores. 1 Introduction Semantic parsing aims to map natural language text to a formal meaning representation (e.g., logical forms or SQL queries). The neural sequenceto-sequence architecture (Sutskever et al., 2014; Bahdanau et al., 2015) has been widely adopted in a variety of natural language processing tasks, and semantic parsing is no exception. However, despite achieving promising results (Dong and Lapata, 2016; Jia and Liang, 2016; Ling et al., 2016), neural semantic parsers remain difficult to interpret, acting in most cases as a black box, not providing any information about what made them arrive at a particular decision. In this work, we explore ways to estimate and interpret the ∗Work carried out during an internship at Microsoft Research. model’s confidence in its predictions, which we argue can provide users with immediate and meaningful feedback regarding uncertain outputs. An explicit framework for confidence modeling would benefit the development cycle of neural semantic parsers which, contrary to more traditional methods, do not make use of lexicons or templates and as a result the sources of errors and inconsistencies are difficult to trace. Moreover, from the perspective of application, semantic parsing is often used to build natural language interfaces, such as dialogue systems. In this case it is important to know whether the system understands the input queries with high confidence in order to make decisions more reliably. For example, knowing that some of the predictions are uncertain would allow the system to generate clarification questions, prompting users to verify the results before triggering unwanted actions. In addition, the training data used for semantic parsing can be small and noisy, and as a result, models do indeed produce uncertain outputs, which we would like our framework to identify. A widely-used confidence scoring method is based on posterior probabilities p (y|x) where x is the input and y the model’s prediction. For a linear model, this method makes sense: as more positive evidence is gathered, the score becomes larger. Neural models, in contrast, learn a complicated function that often overfits the training data. 
Posterior probability is effective when making decisions about model output, but is no longer a good indicator of confidence due in part to the nonlinearity of neural networks (Johansen and Socher, 2017). This observation motivates us to develop a confidence modeling framework for sequenceto-sequence models. We categorize the causes of uncertainty into three types, namely model uncertainty, data uncertainty, and input uncertainty and design different metrics to characterize them. 744 We compute these confidence metrics for a given prediction and use them as features in a regression model which is trained on held-out data to fit prediction F1 scores. At test time, the regression model’s outputs are used as confidence scores. Our approach does not interfere with the training of the model, and can be thus applied to various architectures, without sacrificing test accuracy. Furthermore, we propose a method based on backpropagation which allows to interpret model behavior by identifying which parts of the input contribute to uncertain predictions. Experimental results on two semantic parsing datasets (IFTTT, Quirk et al. 2015; and DJANGO, Oda et al. 2015) show that our model is superior to a method based on posterior probability. We also demonstrate that thresholding confidence scores achieves a good trade-off between coverage and accuracy. Moreover, the proposed uncertainty backpropagation method yields results which are qualitatively more interpretable compared to those based on attention scores. 2 Related Work Confidence Estimation Confidence estimation has been studied in the context of a few NLP tasks, such as statistical machine translation (Blatz et al., 2004; Ueffing and Ney, 2005; Soricut and Echihabi, 2010), and question answering (Gondek et al., 2012). To the best of our knowledge, confidence modeling for semantic parsing remains largely unexplored. A common scheme for modeling uncertainty in neural networks is to place distributions over the network’s weights (Denker and Lecun, 1991; MacKay, 1992; Neal, 1996; Blundell et al., 2015; Gan et al., 2017). But the resulting models often contain more parameters, and the training process has to be accordingly changed, which makes these approaches difficult to work with. Gal and Ghahramani (2016) develop a theoretical framework which shows that the use of dropout in neural networks can be interpreted as a Bayesian approximation of Gaussian Process. We adapt their framework so as to represent uncertainty in the encoder-decoder architectures, and extend it by adding Gaussian noise to weights. Semantic Parsing Various methods have been developed to learn a semantic parser from natural language descriptions paired with meaning representations (Tang and Mooney, 2000; Zettlemoyer and Collins, 2007; Lu et al., 2008; Kwiatkowski et al., 2011; Andreas et al., 2013; Zhao and Huang, 2015). More recently, a few sequence-to-sequence models have been proposed for semantic parsing (Dong and Lapata, 2016; Jia and Liang, 2016; Ling et al., 2016) and shown to perform competitively whilst eschewing the use of templates or manually designed features. 
There have been several efforts to improve these models including the use of a tree decoder (Dong and Lapata, 2016), data augmentation (Jia and Liang, 2016; Kočiský et al., 2016), the use of a grammar model (Xiao et al., 2016; Rabinovich et al., 2017; Yin and Neubig, 2017; Krishnamurthy et al., 2017), coarse-to-fine decoding (Dong and Lapata, 2018), network sharing (Susanto and Lu, 2017; Herzig and Berant, 2017), user feedback (Iyer et al., 2017), and transfer learning (Fan et al., 2017). Current semantic parsers will by default generate some output for a given input even if this is just a random guess. System results can thus be somewhat unexpected, inadvertently affecting user experience. Our goal is to mitigate these issues with a confidence scoring model that can estimate how likely the prediction is correct.

3 Neural Semantic Parsing Model

In the following section we describe the neural semantic parsing model (Dong and Lapata, 2016; Jia and Liang, 2016; Ling et al., 2016) we assume throughout this paper. The model is built upon the sequence-to-sequence architecture and is illustrated in Figure 1. An encoder is used to encode natural language input q = q_1 · · · q_{|q|} into a vector representation, and a decoder learns to generate a logical form representation of its meaning a = a_1 · · · a_{|a|} conditioned on the encoding vectors. The encoder and decoder are two different recurrent neural networks with long short-term memory units (LSTMs; Hochreiter and Schmidhuber 1997) which process tokens sequentially. The probability of generating the whole sequence p(a|q) is factorized as:

p(a|q) = \prod_{t=1}^{|a|} p(a_t | a_{<t}, q)    (1)

where a_{<t} = a_1 · · · a_{t-1}.

Figure 1: We use dropout as approximate Bayesian inference to obtain model uncertainty. The dropout layers are applied to i) token vectors; ii) the encoder's output vectors; iii) bridge vectors; and iv) decoding vectors.

Let e_t ∈ R^n denote the hidden vector of the encoder at time step t. It is computed via e_t = f_{LSTM}(e_{t-1}, q_t), where f_{LSTM} refers to the LSTM unit, and q_t ∈ R^n is the word embedding of q_t. Once the tokens of the input sequence are encoded into vectors, e_{|q|} is used to initialize the hidden states of the first time step in the decoder. Similarly, the hidden vector of the decoder at time step t is computed by d_t = f_{LSTM}(d_{t-1}, a_{t-1}), where a_{t-1} ∈ R^n is the word vector of the previously predicted token. Additionally, we use an attention mechanism (Luong et al., 2015a) to utilize relevant encoder-side context. For the current time step t of the decoder, we compute its attention score with the k-th hidden state in the encoder as:

r_{t,k} ∝ \exp\{d_t \cdot e_k\}    (2)

where \sum_{j=1}^{|q|} r_{t,j} = 1. The probability of generating a_t is computed via:

c_t = \sum_{k=1}^{|q|} r_{t,k} e_k    (3)
d_t^{att} = \tanh(W_1 d_t + W_2 c_t)    (4)
p(a_t | a_{<t}, q) = \mathrm{softmax}_{a_t}(W_o d_t^{att})    (5)

where W_1, W_2 ∈ R^{n×n} and W_o ∈ R^{|V_a|×n} are three parameter matrices. The training objective is to maximize the likelihood of the generated meaning representation a given input q, i.e., maximize \sum_{(q,a)∈D} \log p(a|q), where D represents training pairs. At test time, the model's prediction for input q is obtained via â = \arg\max_{a'} p(a'|q), where a' represents candidate outputs. Because p(a|q) is factorized as shown in Equation (1), we can use beam search to generate tokens one by one rather than iterating over all possible results.
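To make the decoding step concrete, the following NumPy sketch implements the attention and output computation of Equations (2)-(5) for a single decoder time step. It is an illustration only: the `attention_step` helper, the array shapes, and the random toy parameters are ours, not the authors' released code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_step(d_t, enc_states, W1, W2, Wo):
    """One decoding step in the spirit of Equations (2)-(5): dot-product
    attention over the encoder states followed by the output softmax.

    d_t        : (n,)       decoder hidden vector at time step t
    enc_states : (|q|, n)   encoder hidden vectors e_1 ... e_|q|
    W1, W2     : (n, n)     attention combination matrices
    Wo         : (|Va|, n)  output projection onto the target vocabulary
    """
    scores = enc_states @ d_t              # Equation (2): r_{t,k} ∝ exp(d_t · e_k)
    r = softmax(scores)
    c_t = r @ enc_states                   # Equation (3): attention-weighted context
    d_att = np.tanh(W1 @ d_t + W2 @ c_t)   # Equation (4): attentional decoding vector
    p_at = softmax(Wo @ d_att)             # Equation (5): distribution over next token
    return p_at, r

# Toy usage with random parameters (n = 4 hidden units, 6 target symbols).
rng = np.random.default_rng(0)
n, src_len, vocab = 4, 5, 6
p, attn = attention_step(rng.normal(size=n), rng.normal(size=(src_len, n)),
                         rng.normal(size=(n, n)), rng.normal(size=(n, n)),
                         rng.normal(size=(vocab, n)))
print(p.sum(), attn.sum())   # both ≈ 1.0
```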
4 Confidence Estimation Given input q and its predicted meaning representation a, the confidence model estimates Algorithm 1 Dropout Perturbation Input: q, a: Input and its prediction M: Model parameters 1: for i ←1, · · · , F do 2: ˆ Mi ←Apply dropout layers to M ▷Figure 1 3: Run forward pass and compute ˆp(a|q; ˆ Mi) 4: Compute variance of {ˆp(a|q; ˆ Mi)}F i=1 ▷Equation (6) score s (q, a) ∈(0, 1). A large score indicates the model is confident that its prediction is correct. In order to gauge confidence, we need to estimate “what we do not know”. To this end, we identify three causes of uncertainty, and design various metrics characterizing each one of them. We then feed these metrics into a regression model in order to predict s (q, a). 4.1 Model Uncertainty The model’s parameters or structures contain uncertainty, which makes the model less confident about the values of p (a|q). For example, noise in the training data and the stochastic learning algorithm itself can result in model uncertainty. We describe metrics for capturing uncertainty below: Dropout Perturbation Our first metric uses dropout (Srivastava et al., 2014) as approximate Bayesian inference to estimate model uncertainty (Gal and Ghahramani, 2016). Dropout is a widely used regularization technique during training, which relieves overfitting by randomly masking some input neurons to zero according to a Bernoulli distribution. In our work, we use dropout at test time, instead. As shown in Algorithm 1, we perform F forward passes through the network, and collect the results {ˆp(a|q; ˆ Mi)}F i=1 where ˆ Mi represents the perturbed parameters. Then, the uncertainty metric is computed by the variance of results. We define the metric on the sequence level as: var{ˆp(a|q; ˆ Mi)}F i=1. (6) In addition, we compute uncertainty uat at the token-level at via: uat = var{ˆp(at|a<t, q; ˆ Mi)}F i=1 (7) where ˆp(at|a<t, q; ˆ Mi) is the probability of generating token at (Equation (5)) using perturbed model ˆ Mi. We operationalize tokenlevel uncertainty in two ways, as the average score avg{uat}|a| t=1 and the maximum score 746 max{uat}|a| t=1 (since the uncertainty of a sequence is often determined by the most uncertain token). As shown in Figure 1, we add dropout layers in i) the word vectors of the encoder and decoder qt, at; ii) the output vectors of the encoder et; iii) bridge vectors e|q| used to initialize the hidden states of the first time step in the decoder; and iv) decoding vectors datt t (Equation (4)). Gaussian Noise Standard dropout can be viewed as applying noise sampled from a Bernoulli distribution to the network parameters. We instead use Gaussian noise, and apply the metrics in the same way discussed above. Let v denote a vector perturbed by noise, and g a vector sampled from the Gaussian distribution N(0, σ2). We use ˆv = v + g and ˆv = v + v ⊙g as two noise injection methods. Intuitively, if the model is more confident in an example, it should be more robust to perturbations. Posterior Probability Our last class of metrics is based on posterior probability. We use the log probability log p(a|q) as a sequence-level metric. The token-level metric min{p(at|a<t, q)}|a| t=1 can identify the most uncertain predicted token. The perplexity per token −1 |a| |a| t=1 log p (at|a<t, q) is also employed. 4.2 Data Uncertainty The coverage of training data also affects the uncertainty of predictions. If the input q does not match the training distribution or contains unknown words, it is difficult to predict p (a|q) reliably. 
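As a rough illustration of the dropout-based model-uncertainty metric above (Algorithm 1 and Equations (6)-(7)), the sketch below runs several stochastic forward passes and takes variances of the resulting probabilities. The `token_prob_fn` hook and the toy parser are hypothetical stand-ins for a real parser whose dropout layers are kept active at test time.

```python
import numpy as np

def dropout_uncertainty(token_prob_fn, q, a, passes=30, seed=0):
    """Model-uncertainty metrics in the spirit of Algorithm 1.

    token_prob_fn(q, a, rng) is assumed to run one forward pass with dropout
    active and return the per-token probabilities [p(a_t | a_<t, q; M_i)] for
    the prediction a (a hypothetical hook, not part of the original code).
    """
    rng = np.random.default_rng(seed)
    token_probs = np.stack([token_prob_fn(q, a, rng) for _ in range(passes)])  # (F, |a|)
    seq_probs = token_probs.prod(axis=1)                                       # (F,)
    token_var = token_probs.var(axis=0)                                        # u_{a_t}, Eq. (7)
    return {
        "seq_var": seq_probs.var(),       # sequence-level metric, Eq. (6)
        "tok_var_avg": token_var.mean(),  # average token-level uncertainty
        "tok_var_max": token_var.max(),   # most uncertain token
    }

# Toy stand-in for a parser whose dropout masks make each pass slightly different.
def fake_parser(q, a, rng):
    base = np.linspace(0.9, 0.6, num=len(a))
    return np.clip(base + rng.normal(scale=0.05, size=len(a)), 1e-6, 1.0)

print(dropout_uncertainty(fake_parser, "turn volume up", ["set", "volume", "100%"]))
```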
We define two metrics: Probability of Input We train a language model on the training data, and use it to estimate the probability of input p(q|D) where D represents the training data. Number of Unknown Tokens Tokens that do not appear in the training data harm robustness, and lead to uncertainty. So, we use the number of unknown tokens in the input q as a metric. 4.3 Input Uncertainty Even if the model can estimate p (a|q) reliably, the input itself may be ambiguous. For instance, the input the flight is at 9 o’clock can be interpreted as either flight time(9am) or flight time(9pm). Selecting between these predictions is difficult, especially if they are both highly likely. We use the following metrics to measure uncertainty caused by ambiguous inputs. Variance of Top Candidates We use the variance of the probability of the top candidates to indicate whether these are similar. The sequencelevel metric is computed by: var{p(ai|q)}K i=1 where a1 . . . aK are the K-best predictions obtained by the beam search during inference (Section 3). Entropy of Decoding The sequence-level entropy of the decoding process is computed via: H[a|q] = −  a′ p(a′|q) log p(a′|q) which we approximate by Monte Carlo sampling rather than iterating over all candidate predictions. The token-level metrics of decoding entropy are computed by avg{H[at|a<t, q]}|a| t=1 and max{H[at|a<t, q]}|a| t=1. 4.4 Confidence Scoring The sentence- and token-level confidence metrics defined in Section 4 are fed into a gradient tree boosting model (Chen and Guestrin, 2016) in order to predict the overall confidence score s (q, a). The model is wrapped with a logistic function so that confidence scores are in the range of (0, 1). Because the confidence score indicates whether the prediction is likely to be correct, we can use the prediction’s F1 (see Section 6.2) as target value. The training loss is defined as:  (q,a)∈D ln(1+e−ˆs(q,a))yq,a+ ln(1+eˆs(q,a))(1−yq,a) where D represents the data, yq,a is the target F1 score, and ˆs(q, a) the predicted confidence score. We refer readers to Chen and Guestrin (2016) for mathematical details of how the gradient tree boosting model is trained. Notice that we learn the confidence scoring model on the held-out set (rather than on the training data of the semantic parser) to avoid overfitting. 5 Uncertainty Interpretation Confidence scores are useful in so far they can be traced back to the inputs causing the uncertainty in the first place. For semantic parsing, identifying 747 ݉ ݌ଵ ݌ଶ ܿଵ ܿଶ ݒ௣భ ௠ ݒ௣మ ௠ ݒ௠ ௖భ ݒ௠ ௖మ Backpropagation ƒ”‡– ݉ൌሼ݌ଵǡ ݌ଶሽ Ћކ ݉ൌሼܿଵǡ ܿଶሽ ݑ௠: score of neuron ݉ ݒ௠ ௖భ: contribution ratio (from ܿଵto ݉) Figure 2: Uncertainty backpropagation at the neuron level. Neuron m’s score um is collected from child neurons c1 and c2 by um = vc1 muc1 + vc2 muc2. The score um is then redistributed to its parent neurons p1 and p2, which satisfies vm p1 + vm p2 = 1. which input words contribute to uncertainty would be of value, e.g., these could be treated explicitly as special cases or refined if they represent noise. In this section, we introduce an algorithm that backpropagates token-level uncertainty scores (see Equation (7)) from predictions to input tokens, following the ideas of Bach et al. (2015) and Zhang et al. (2016). Let um denote neuron m’s uncertainty score, which indicates the degree to which it contributes to uncertainty. 
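Before turning to the backpropagation details, the confidence scorer of Section 4.4 can be sketched as follows, assuming the xgboost Python package with its scikit-learn-style `XGBRegressor` wrapper; using the `reg:logistic` objective is one plausible way to realize the logistic wrapping, and the feature matrix and synthetic targets are illustrative only.

```python
import numpy as np
import xgboost as xgb   # assumes the xgboost package (Chen and Guestrin, 2016)

def fit_confidence_model(metrics_dev, f1_dev):
    """Fit a gradient-boosted confidence scorer on held-out examples.

    metrics_dev : (N, d) array of the uncertainty metrics from Section 4
    f1_dev      : (N,)   target F1 of each prediction, in [0, 1]
    """
    model = xgb.XGBRegressor(
        objective="reg:logistic",   # keeps predicted scores in (0, 1)
        n_estimators=50,            # paper selects from {20, 50}
        max_depth=4,                # paper selects from {3, 4, 5}
        subsample=0.8,
    )
    model.fit(metrics_dev, f1_dev)
    return model

# Toy usage with 3 synthetic metrics; real features would be the dropout, noise,
# posterior, language-model, #UNK, variance, and entropy scores described above.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 1.0 / (1.0 + np.exp(-X[:, 0] + 0.5 * X[:, 1]))   # fake F1 targets in (0, 1)
scorer = fit_confidence_model(X, y)
print(scorer.predict(X[:3]))                          # confidence scores s(q, a)
```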
As shown in Figure 2, um is computed by the summation of the scores backpropagated from its child neurons: um =  c∈Child(m) vc muc where Child(m) is the set of m’s child neurons, and the non-negative contribution ratio vc m indicates how much we backpropagate uc to neuron m. Intuitively, if neuron m contributes more to c’s value, ratio vc m should be larger. After obtaining score um, we redistribute it to its parent neurons in the same way. Contribution ratios from m to its parent neurons are normalized to 1:  p∈Parent(m) vm p = 1 where Parent(m) is the set of m’s parent neurons. Given the above constraints, we now define different backpropagation rules for the operators used in neural networks. We first describe the rules used for fully-connected layers. Let x denote the input. The output is computed by z = σ(Wx+b), where σ is a nonlinear function, W ∈R|z|∗|x| is the weight matrix, b ∈R|z| is the bias, and neuron zi is computed via zi = σ(|x| j=1 Wi,jxj + bi). Neuron xk’s uncertainty score uxk is gathAlgorithm 2 Uncertainty Interpretation Input: q, a: Input and its prediction Output: {ˆuqt}|q| t=1: Interpretation scores for input tokens Function: TokenUnc: Get token-level uncertainty 1: ▷Get token-level uncertainty for predicted tokens 2: {uat}|a| t=1 ←TokenUnc(q, a) 3: ▷Initialize uncertainty scores for backpropagation 4: for t ←1, · · · , |a| do 5: Decoder classifier’s output neuron ←uat 6: ▷Run backpropagation 7: for m ←neuron in backward topological order do 8: ▷Gather scores from child neurons 9: um ← c∈Child(m) vc muc 10: ▷Summarize scores for input words 11: for t ←1, · · · , |q| do 12: uqt ← c∈qt uc 13: {ˆuqt}|q| t=1 ←normalize {uqt}|q| t=1 ered from the next layer: uxk = |z|  i=1 vzi xkuzi = |z|  i=1 |Wi,kxk| |x| j=1 |Wi,jxj| uzi ignoring the nonlinear function σ and the bias b. The ratio vzi xk is proportional to the contribution of xk to the value of zi. We define backpropagation rules for elementwise vector operators. For z = x ± y, these are: uxk = |xk| |xk|+|yk|uzk uyk = |yk| |xk|+|yk|uzk where the contribution ratios vzk xk and vzk yk are determined by |xk| and |yk|. For multiplication, the contribution of two elements in 1 3 ∗3 should be the same. So, the propagation rules for z = x⊙y are: uxk= | log |xk|| | log |xk||+| log |yk||uzk uyk= | log |yk|| | log |xk||+| log |yk||uzk where the contribution ratios are determined by | log |xk|| and | log |yk||. For scalar multiplication, z = λx where λ denotes a constant. We directly assign z’s uncertainty scores to x and the backpropagation rule is uxk = uzk. As shown in Algorithm 2, we first initialize uncertainty backpropagation in the decoder (lines 1–5). For each predicted token at, we compute its uncertainty score uat as in Equation (7). Next, we find the dimension of at in the decoder’s softmax classifier (Equation (5)), and initialize the neuron with the uncertainty score uat. We then backpropagate these uncertainty scores through 748 Dataset Example IFTTT turn android phone to full volume at 7am monday to friday date time−every day of the week at−((time of day (07)(:)(00)) (days of the week (1)(2)(3)(4)(5))) THEN android device−set ringtone volume−(volume ({’ volume level’:1.0,’name’:’100%’})) DJANGO for every key in sorted list of user settings for key in sorted(user settings): Table 1: Natural language descriptions and their meaning representations from IFTTT and DJANGO. the network (lines 6–9), and finally into the neurons of the input words. 
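A minimal NumPy sketch of the neuron-level redistribution rules just described (fully-connected layers, element-wise addition, and element-wise multiplication) is given below; the function names and epsilon guards are ours, and the bias and nonlinearity are ignored as in the text.

```python
import numpy as np

def backprop_linear(u_z, W, x, eps=1e-12):
    """Redistribute output scores u_z of z = σ(Wx + b) onto the input x,
    following the fully-connected rule (bias and nonlinearity ignored)."""
    contrib = np.abs(W * x[None, :])                  # |W_{i,k} x_k|
    ratios = contrib / (contrib.sum(axis=1, keepdims=True) + eps)
    return ratios.T @ u_z                             # u_{x_k}

def backprop_add(u_z, x, y, eps=1e-12):
    """Rule for z = x ± y: split u_z in proportion to |x_k| and |y_k|."""
    wx, wy = np.abs(x), np.abs(y)
    total = wx + wy + eps
    return u_z * wx / total, u_z * wy / total

def backprop_mul(u_z, x, y, eps=1e-12):
    """Rule for z = x ⊙ y: split u_z in proportion to |log|x_k|| and |log|y_k||."""
    wx = np.abs(np.log(np.abs(x) + eps))
    wy = np.abs(np.log(np.abs(y) + eps))
    total = wx + wy + eps
    return u_z * wx / total, u_z * wy / total

# Toy check: contribution ratios leaving each output neuron sum to one,
# so the total uncertainty mass is preserved by the linear rule.
rng = np.random.default_rng(2)
u_z, W, x = rng.random(3), rng.normal(size=(3, 4)), rng.normal(size=4)
print(np.allclose(backprop_linear(u_z, W, x).sum(), u_z.sum()))
```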
We summarize them and compute the token-level scores for interpreting the results (line 10–13). For input word vector qt, we use the summation of its neuron-level scores as the token-level score: ˆuqt ∝  c∈qt uc where c ∈qt represents the neurons of word vector qt, and |q| t=1 ˆuqt = 1. We use the normalized score ˆuqt to indicate token qt’s contribution to prediction uncertainty. 6 Experiments In this section we describe the datasets used in our experiments and various details concerning our models. We present our experimental results and analysis of model behavior. Our code is publicly available at https://github.com/ donglixp/confidence. 6.1 Datasets We trained the neural semantic parser introduced in Section 3 on two datasets covering different domains and meaning representations. Examples are shown in Table 1. IFTTT This dataset (Quirk et al., 2015) contains a large number of if-this-then-that programs crawled from the IFTTT website. The programs are written for various applications, such as home security (e.g., “email me if the window opens”), and task automation (e.g., “save instagram photos to dropbox”). Whenever a program’s trigger is satisfied, an action is performed. Triggers and actions represent functions with arguments; they are selected from different channels (160 in total) representing various services (e.g., Android). There are 552 trigger functions and 229 action functions. The original split contains 77, 495 training, 5, 171 development, and 4, 294 test instances. The subset that removes non-English descriptions was used in our experiments. DJANGO This dataset (Oda et al., 2015) is built upon the code of the Django web framework. Each line of Python code has a manually annotated natural language description. Our goal is to map the English pseudo-code to Python statements. This dataset contains diverse use cases, such as iteration, exception handling, and string manipulation. The original split has 16, 000 training, 1, 000 development, and 1, 805 test examples. 6.2 Settings We followed the data preprocessing used in previous work (Dong and Lapata, 2016; Yin and Neubig, 2017). Input sentences were tokenized using NLTK (Bird et al., 2009) and lowercased. We filtered words that appeared less than four times in the training set. Numbers and URLs in IFTTT and quoted strings in DJANGO were replaced with place holders. Hyperparameters of the semantic parsers were validated on the development set. The learning rate and the smoothing constant of RMSProp (Tieleman and Hinton, 2012) were 0.002 and 0.95, respectively. The dropout rate was 0.25. A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for DJANGO. Dimensions for the word embedding and hidden vector were selected from {150, 250}. The beam size during decoding was 5. For IFTTT, we view the predicted trees as a set of productions, and use balanced F1 as evaluation metric (Quirk et al., 2015). We do not measure accuracy because the dataset is very noisy and there rarely is an exact match between the predicted output and the gold standard. The F1 score of our neural semantic parser is 50.1%, which is comparable to Dong and Lapata (2016). For DJANGO, we measure the fraction of exact matches, where F1 score is equal to accuracy. 
Because there are unseen variable names at test time, we use attention scores as alignments to replace unknown to749 Method IFTTT DJANGO POSTERIOR 0.477 0.694 CONF 0.625 0.793 −MODEL 0.595 0.759 −DATA 0.610 0.787 −INPUT 0.608 0.785 Table 2: Spearman ρ correlation between confidence scores and F1. Best results are shown in bold. All correlations are significant at p < 0.01. kens in the prediction with the input words they align to (Luong et al., 2015b). The accuracy of our parser is 53.7%, which is better than the result (45.1%) of the sequence-to-sequence model reported in Yin and Neubig (2017). To estimate model uncertainty, we set dropout rate to 0.1, and performed 30 inference passes. The standard deviation of Gaussian noise was 0.05. The language model was estimated using KenLM (Heafield et al., 2013). For input uncertainty, we computed variance for the 10-best candidates. The confidence metrics were implemented in batch mode, to take full advantage of GPUs. Hyperparameters of the confidence scoring model were cross-validated. The number of boosted trees was selected from {20, 50}. The maximum tree depth was selected from {3, 4, 5}. We set the subsample ratio to 0.8. All other hyperparameters in XGBoost (Chen and Guestrin, 2016) were left with their default values. 6.3 Results Confidence Estimation We compare our approach (CONF) against confidence scores based on posterior probability p(a|q) (POSTERIOR). We also report the results of three ablation variants (−MODEL, −DATA, −INPUT) by removing each group of confidence metrics described in Section 4. We measure the relationship between confidence scores and F1 using Spearman’s ρ correlation coefficient which varies between −1 and 1 (0 implies there is no correlation). High ρ indicates that the confidence scores are high for correct predictions and low otherwise. As shown in Table 2, our method CONF outperforms POSTERIOR by a large margin. The ablation results indicate that model uncertainty plays the most important role among the confidence metrics. In contrast, removing the metrics of data uncertainty affects performance less, because most examples in the datasets are in-domain. ImproveF1 Dout Noise PR PPL LM #UNK Var Dout 0.59 Noise 0.59 0.90 PR 0.52 0.84 0.82 PPL 0.48 0.78 0.78 0.89 LM 0.30 0.26 0.32 0.27 0.25 #UNK 0.27 0.31 0.33 0.29 0.25 0.32 Var 0.49 0.83 0.78 0.88 0.79 0.25 0.27 Ent 0.53 0.78 0.78 0.80 0.75 0.27 0.30 0.76 Table 3: Correlation matrix for F1 and individual confidence metrics on the IFTTT dataset. All correlations are significant at p < 0.01. Best predictors are shown in bold. Dout is short for dropout, PR for posterior probability, PPL for perplexity, LM for probability based on a language model, #UNK for number of unknown tokens, Var for variance of top candidates, and Ent for Entropy. F1 Dout Noise PR PPL LM #UNK Var Dout 0.76 Noise 0.78 0.94 PR 0.73 0.89 0.90 PPL 0.64 0.80 0.81 0.84 LM 0.32 0.41 0.40 0.38 0.30 #UNK 0.27 0.28 0.28 0.26 0.19 0.35 Var 0.70 0.87 0.87 0.89 0.87 0.37 0.23 Ent 0.72 0.89 0.90 0.92 0.86 0.38 0.26 0.90 Table 4: Correlation matrix for F1 and individual confidence metrics on the DJANGO dataset. All correlations are significant at p < 0.01. Best predictors are shown in bold. Same shorthands apply as in Table 3. ments for each group of metrics are significant with p < 0.05 according to bootstrap hypothesis testing (Efron and Tibshirani, 1994). Tables 3 and 4 show the correlation matrix for F1 and individual confidence metrics on the IFTTT and DJANGO datasets, respectively. 
As can be seen, metrics representing model uncertainty and input uncertainty are more correlated to each other compared with metrics capturing data uncertainty. Perhaps unsurprisingly metrics of the same group are highly inter-correlated since they model the same type of uncertainty. Table 5 shows the relative importance of individual metrics in the regression model. As importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as feature to the branch of the decision tree (Chen and Guestrin, 2016). The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays 750 Metric Dout Noise PR PPL LM #UNK Var Ent IFTTT 0.39 1.00 0.89 0.27 0.26 0.46 0.43 0.34 DJANGO 1.00 0.59 0.22 0.58 0.49 0.14 0.24 0.25 Table 5: Importance scores of confidence metrics (normalized by maximum value on each dataset). Best results are shown in bold. Same shorthands apply as in Table 3. the most important role. On IFTTT, the number of unknown tokens (#UNK) and the variance of top candidates (var(K-best)) are also very helpful because this dataset is relatively noisy and contains many ambiguous inputs. Finally, in real-world applications, confidence scores are often used as a threshold to trade-off precision for coverage. Figure 3 shows how F1 score varies as we increase the confidence threshold, i.e., reduce the proportion of examples that we return answers for. F1 score improves monotonically for POSTERIOR and our method, which, however, achieves better performance when coverage is the same. Uncertainty Interpretation We next evaluate how our backpropagation method (see Section 5) allows us to identify input tokens contributing to uncertainty. We compare against a method that interprets uncertainty based on the attention mechanism (ATTENTION). As shown in Equation (2), attention scores rt,k can be used as soft alignments between the time step t of the decoder and the k-th input token. We compute the normalized uncertainty score ˆuqt for a token qt via: ˆuqt ∝ |a|  t=1 rt,kuat (8) where uat is the uncertainty score of the predicted token at (Equation (7)), and |q| t=1 ˆuqt = 1. Unfortunately, the evaluation of uncertainty interpretation methods is problematic. For our semantic parsing task, we do not a priori know which tokens in the natural language input contribute to uncertainty and these may vary depending on the architecture used, model parameters, and so on. We work around this problem by creating a proxy gold standard. We inject noise to the vectors representing tokens in the encoder (see Section 4.1) and then estimate the uncertainty caused by each token qt (Equation (6)) under the assumption that 100% 90% 80% 70% 60% 50% 40% 30% Proportion of Examples 0.5 0.6 0.7 F1 Score Posterior Conf (a) IFTTT 100% 90% 80% 70% 60% 50% 40% 30% Proportion of Examples 0.5 0.6 0.7 0.8 0.9 1.0 Accuracy Posterior Conf (b) DJANGO Figure 3: Confidence scores are used as threshold to filter out uncertain test examples. As the threshold increases, performance improves. The horizontal axis shows the proportion of examples beyond the threshold. addition of noise should only affect genuinely uncertain tokens. Notice that here we inject noise to one token at a time1 instead of all parameters (see Figure 1). Tokens identified as uncertain by the above procedure are considered gold standard and compared to those identified by our method. We use Gaussian noise to perturb vectors in our experiments (dropout obtained similar results). 
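For the coverage-accuracy trade-off analysed in Figure 3 above, a thresholding routine can be sketched as follows; the helper name and the synthetic confidence scores are illustrative only.

```python
import numpy as np

def coverage_curve(confidence, f1, proportions=(1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3)):
    """Keep only the most confident examples and report average F1 on the
    retained subset, one point per retained proportion (as in Figure 3)."""
    confidence, f1 = np.asarray(confidence), np.asarray(f1)
    order = np.argsort(-confidence)            # most confident first
    points = []
    for p in proportions:
        keep = order[: max(1, int(round(p * len(order))))]
        points.append((p, f1[keep].mean()))
    return points

# Toy usage: confident predictions tend to have higher F1 here,
# so average F1 rises as the kept proportion shrinks.
rng = np.random.default_rng(3)
conf = rng.random(1000)
scores = np.clip(conf + rng.normal(scale=0.2, size=1000), 0, 1)
for p, f in coverage_curve(conf, scores):
    print(f"keep {p:.0%}: F1 = {f:.3f}")
```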
We define an evaluation metric based on the overlap (overlap@K) among tokens identified as uncertain by the model and the gold standard. Given an example, we first compute the interpretation scores of the input tokens according to our method, and obtain a list τ1 of K tokens with highest scores. We also obtain a list τ2 of K tokens with highest ground-truth scores and measure the degree of overlap between these two lists: overlap@K = |τ1 ∩τ2| K 1Noise injection as described above is used for evaluation purposes only since we need to perform forward passes multiple times (see Section 4.1) for each token, and the running time increases linearly with the input length. 751 Method IFTTT DJANGO @2 @4 @2 @4 ATTENTION 0.525 0.737 0.637 0.684 BACKPROP 0.608 0.791 0.770 0.788 Table 6: Uncertainty interpretation against inferred ground truth; we compute the overlap between tokens identified as contributing to uncertainty by our method and those found in the gold standard. Overlap is shown for top 2 and 4 tokens. Best results are in bold. google calendar−any event starts THEN facebook −create a status message−(status message ({description})) ATT post calendar event to facebook BP post calendar event to facebook feed−new feed item−(feed url( url sports.espn.go.com)) THEN ... ATT espn mlb headline to readability BP espn mlb headline to readability weather−tomorrow’s low drops below−(( temperature(0)) (degrees in(c))) THEN ... ATT warn me when it’s going to be freezing tomorrow BP warn me when it’s going to be freezing tomorrow if str number[0] == ’ STR ’: ATT if first element of str number equals a string STR . BP if first element of str number equals a string STR . start = 0 ATT start is an integer 0 . BP start is an integer 0 . if name.startswith(’ STR ’): ATT if name starts with an string STR , BP if name starts with an string STR , Table 7: Uncertainty interpretation for ATTENTION (ATT) and BACKPROP (BP) . The first line in each group is the model prediction. Predicted tokens and input words with large scores are shown in red and blue, respectively. where K ∈{2, 4} in our experiments. For example, the overlap@4 metric of the lists τ1 = [q7, q8, q2, q3] and τ2 = [q7, q8, q3, q4] is 3/4, because there are three overlapping tokens. Table 6 reports results with overlap@2 and overlap@4. Overall, BACKPROP achieves better interpretation quality than the attention mechanism. On both datasets, about 80% of the top-4 tokens identified as uncertain agree with the ground truth. Table 7 shows examples where our method has identified input tokens contributing to the uncertainty of the output. We highlight token at if its uncertainty score uat is greater than 0.5 ∗avg{uat′}|a| t′=1. The results illustrate that the parser tends to be uncertain about tokens which are function arguments (e.g., URLs, and message content), and ambiguous inputs. The examples show that BACKPROP is qualitatively better compared to ATTENTION; attention scores often produce inaccurate alignments while BACKPROP can utilize information flowing through the LSTMs rather than only relying on the attention mechanism. 7 Conclusions In this paper we presented a confidence estimation model and an uncertainty interpretation method for neural semantic parsing. Experimental results show that our method achieves better performance than competitive baselines on two datasets. Directions for future work are many and varied. 
The proposed framework could be applied to a variety of tasks (Bahdanau et al., 2015; Schmaltz et al., 2017) employing sequence-to-sequence architectures. We could also utilize the confidence estimation model within an active learning framework for neural semantic parsing. Acknowledgments We would like to thank Pengcheng Yin for sharing with us the preprocessed version of the DJANGO dataset. We gratefully acknowledge the financial support of the European Research Council (award number 681760; Dong, Lapata) and the AdeptMind Scholar Fellowship program (Dong). References Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 47–52, Sofia, Bulgaria. Sebastian Bach, Alexander Binder, Grgoire Montavon, Frederick Klauschen, Klaus-Robert Mller, and Wojciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE, 10(7):1–46. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, California. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O’Reilly Media. John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto 752 Sanchis, and Nicola Ueffing. 2004. Confidence estimation for machine translation. In Proceedings of the 20th International Conference on Computational Linguistics, pages 315–321, Geneva, Switzerland. Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. 2015. Weight uncertainty in neural networks. In Proceedings of the 32nd International Conference on International Conference on Machine Learning, pages 1613–1622, Lille, France. Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794, San Francisco, California. John S Denker and Yann Lecun. 1991. Transforming neural-net output levels to probability distributions. In Advances in neural information processing systems, pages 853–859, Denver, Colorado. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 33–43, Berlin, Germany. Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Australia. Bradley Efron and Robert J Tibshirani. 1994. An Introduction to the Bootstrap. CRC press. Xing Fan, Emilio Monti, Lambert Mathias, and Markus Dreyer. 2017. Transfer learning for neural semantic parsing. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 48–56, Vancouver, Canada. Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the 33rd International Conference on Machine Learning, pages 1050–1059, New York City, NY. Zhe Gan, Chunyuan Li, Changyou Chen, Yunchen Pu, Qinliang Su, and Lawrence Carin. 2017. Scalable bayesian learning of recurrent neural networks for language modeling. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 321–331, Vancouver, Canada. D. C. Gondek, A. Lally, A. Kalyanpur, J. W. Murdock, P. A. Duboue, L. Zhang, Y. Pan, Z. M. Qiu, and C. Welty. 2012. A framework for merging and ranking of answers in DeepQA. IBM Journal of Research and Development, 56(3.4):14:1–14:12. Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 690–696, Sofia, Bulgaria. Jonathan Herzig and Jonathan Berant. 2017. Neural semantic parsing over multiple knowledge-bases. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 623– 628, Vancouver, Canada. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9:1735– 1780. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 963–973, Vancouver, Canada. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 12–22, Berlin, Germany. Alexander Johansen and Richard Socher. 2017. Learning when to skim and when to read. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 257–264, Vancouver, Canada. Tom´aˇs Koˇcisk´y, G´abor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised sequential autoencoders. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1078– 1087, Austin, Texas. Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1517–1527, Copenhagen, Denmark. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1512–1523, Edinburgh, Scotland. Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 599–609, Berlin, Germany. Wei Lu, Hwee Tou Ng, Wee Sun Lee, and Luke Zettlemoyer. 2008. A generative model for parsing natural language to meaning representations. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 783–792, Honolulu, Hawaii. 753 Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attentionbased neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the rare word problem in neural machine translation. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 11–19, Beijing, China. David J. C. MacKay. 1992. A practical bayesian framework for backpropagation networks. Neural Computation, 4(3):448–472. Radford M Neal. 1996. Bayesian learning for neural networks, volume 118. Springer Science & Business Media. Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Learning to generate pseudo-code from source code using statistical machine translation. In Proceedings of the 2015 30th IEEE/ACM International Conference on Automated Software Engineering, pages 574–584, Washington, DC. Chris Quirk, Raymond Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 878–888, Beijing, China. Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code generation and semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1139–1149, Vancouver, Canada. Allen Schmaltz, Yoon Kim, Alexander Rush, and Stuart Shieber. 2017. Adapting sequence models for sentence correction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2797–2803, Copenhagen, Denmark. Radu Soricut and Abdessamad Echihabi. 2010. Trustrank: Inducing trust in automatic translations via ranking. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 612–621, Uppsala, Sweden. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Raymond Hendy Susanto and Wei Lu. 2017. Neural architectures for multilingual semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 38–44, Vancouver, Canada. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, Montreal, Canada. Lappoon R. Tang and Raymond J. Mooney. 2000. Automated construction of database interfaces: Intergrating statistical and relational learning for semantic parsing. In 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 133–141, Hong Kong, China. T. Tieleman and G. Hinton. 2012. Lecture 6.5— RMSProp: Divide the gradient by a running average of its recent magnitude. Technical report. Nicola Ueffing and Hermann Ney. 2005. Word-level confidence estimation for machine translation using phrase-based translation models. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 763–770, Vancouver, Canada. Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1341–1350, Berlin, Germany. Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 440– 450, Vancouver, Canada. Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 678–687, Prague, Czech Republic. Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, and Stan Sclaroff. 2016. Top-down neural attention by excitation backprop. In European Conference on Computer Vision, pages 543–559, Amsterdam, Netherlands. Kai Zhao and Liang Huang. 2015. Type-driven incremental semantic parsing with polymorphism. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1416–1421, Denver, Colorado.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 66–75 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 66 Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates Taku Kudo Google, Inc. [email protected] Abstract Subword units are an effective way to alleviate the open vocabulary problems in neural machine translation (NMT). While sentences are usually converted into unique subword sequences, subword segmentation is potentially ambiguous and multiple segmentations are possible even with the same vocabulary. The question addressed in this paper is whether it is possible to harness the segmentation ambiguity as a noise to improve the robustness of NMT. We present a simple regularization method, subword regularization, which trains the model with multiple subword segmentations probabilistically sampled during training. In addition, for better subword sampling, we propose a new subword segmentation algorithm based on a unigram language model. We experiment with multiple corpora and report consistent improvements especially on low resource and out-of-domain settings. 1 Introduction Neural Machine Translation (NMT) models (Bahdanau et al., 2014; Luong et al., 2015; Wu et al., 2016; Vaswani et al., 2017) often operate with fixed word vocabularies, as their training and inference depend heavily on the vocabulary size. However, limiting vocabulary size increases the amount of unknown words, which makes the translation inaccurate especially in an open vocabulary setting. A common approach for dealing with the open vocabulary issue is to break up rare words into subword units (Schuster and Nakajima, 2012; Chitnis and DeNero, 2015; Sennrich et al., 2016; Wu et al., 2016). Byte-Pair-Encoding Subwords ( means spaces) Vocabulary id sequence Hell/o/ world 13586 137 255 H/ello/ world 320 7363 255 He/llo/ world 579 10115 255 /He/l/l/o/ world 7 18085 356 356 137 255 H/el/l/o/ /world 320 585 356 137 7 12295 Table 1: Multiple subword sequences encoding the same sentence “Hello World” (BPE) (Sennrich et al., 2016) is a de facto standard subword segmentation algorithm applied to many NMT systems and achieving top translation quality in several shared tasks (Denkowski and Neubig, 2017; Nakazawa et al., 2017). BPE segmentation gives a good balance between the vocabulary size and the decoding efficiency, and also sidesteps the need for a special treatment of unknown words. BPE encodes a sentence into a unique subword sequence. However, a sentence can be represented in multiple subword sequences even with the same vocabulary. Table 1 illustrates an example. While these sequences encode the same input “Hello World”, NMT handles them as completely different inputs. This observation becomes more apparent when converting subword sequences into id sequences (right column in Table 1). These variants can be viewed as a spurious ambiguity, which might not always be resolved in decoding process. At training time of NMT, multiple segmentation candidates will make the model robust to noise and segmentation errors, as they can indirectly help the model to learn the compositionality of words, e.g., “books” can be decomposed into “book” + “s”. In this study, we propose a new regularization method for open-vocabulary NMT, called subword regularization, which employs multiple subword segmentations to make the NMT model accurate and robust. 
Subword regularization consists of the following two sub-contributions: 67 • We propose a simple NMT training algorithm to integrate multiple segmentation candidates. Our approach is implemented as an on-the-fly data sampling, which is not specific to NMT architecture. Subword regularization can be applied to any NMT system without changing the model structure. • We also propose a new subword segmentation algorithm based on a language model, which provides multiple segmentations with probabilities. The language model allows to emulate the noise generated during the segmentation of actual data. Empirical experiments using multiple corpora with different sizes and languages show that subword regularization achieves significant improvements over the method using a single subword sequence. In addition, through experiments with out-of-domain corpora, we show that subword regularization improves the robustness of the NMT model. 2 Neural Machine Translation with multiple subword segmentations 2.1 NMT training with on-the-fly subword sampling Given a source sentence X and a target sentence Y , let x = (x1, . . . , xM) and y = (y1, . . . , yN) be the corresponding subword sequences segmented with an underlying subword segmenter, e.g., BPE. NMT models the translation probability P(Y |X) = P(y|x) as a target language sequence model that generates target subword yn conditioning on the target history y<n and source input sequence x: P(y|x; θ) = N ∏ n=1 P(yn|x, y<n; θ), (1) where θ is a set of model parameters. A common choice to predict the subword yn is to use a recurrent neural network (RNN) architecture. However, note that subword regularization is not specific to this architecture and can be applicable to other NMT architectures without RNN, e.g., (Vaswani et al., 2017; Gehring et al., 2017). NMT is trained using the standard maximum likelihood estimation, i.e., maximizing the loglikelihood L(θ) of a given parallel corpus D = {⟨X(s), Y (s)⟩}|D| s=1 = {⟨x(s), y(s)⟩}|D| s=1, θMLE = arg max θ L(θ) where, L(θ) = |D| ∑ s=1 log P(y(s)|x(s); θ).(2) We here assume that the source and target sentences X and Y can be segmented into multiple subword sequences with the segmentation probabilities P(x|X) and P(y|Y ) respectively. In subword regularization, we optimize the parameter set θ with the marginalized likelihood as (3). Lmarginal(θ) = |D| ∑ s=1 Ex∼P(x|X(s)) y∼P(y|Y (s)) [log P(y|x; θ)] (3) Exact optimization of (3) is not feasible as the number of possible segmentations increases exponentially with respect to the sentence length. We approximate (3) with finite k sequences sampled from P(x|X) and P(y|Y ) respectively. Lmarginal(θ) ∼= 1 k2 |D| ∑ s=1 k ∑ i=1 k ∑ j=1 log P(yj|xi; θ) xi ∼P(x|X(s)), yj ∼P(y|Y (s)). (4) For the sake of simplicity, we use k = 1. Training of NMT usually uses an online training for efficiency, in which the parameter θ is iteratively optimized with respect to the smaller subset of D (mini-batch). When we have a sufficient number of iterations, subword sampling is executed via the data sampling of online training, which yields a good approximation of (3) even if k = 1. It should be noted, however, that the subword sequence is sampled on-the-fly for each parameter update. 2.2 Decoding In the decoding of NMT, we only have a raw source sentence X. A straightforward approach for decoding is to translate from the best segmentation x∗that maximizes the probability P(x|X), i.e., x∗ = argmaxxP(x|X). 
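A training-loop skeleton for the k = 1 approximation in Equation (4) might look as follows; `sample_segmentation` and `nmt_update` are hypothetical hooks standing in for the subword sampler of Section 3.3 and one NMT gradient step, respectively, and the toy segmenter is invented for illustration.

```python
import random

def train_with_subword_regularization(parallel_corpus, sample_segmentation,
                                      nmt_update, epochs=1, seed=0):
    """Skeleton of the on-the-fly sampling behind Equation (4) with k = 1.

    sample_segmentation(sentence, rng) is assumed to return one subword
    sequence drawn from P(x | X), and nmt_update(x, y) to run one gradient
    step on log P(y | x; θ); both are stand-ins, not the paper's code.
    """
    rng = random.Random(seed)
    data = list(parallel_corpus)
    for _ in range(epochs):
        rng.shuffle(data)
        for X, Y in data:
            x = sample_segmentation(X, rng)   # re-sampled at every parameter update
            y = sample_segmentation(Y, rng)
            nmt_update(x, y)

# Toy usage with stand-in components.
corpus = [("Hello world", "Hallo Welt"), ("a book", "ein Buch")]
def toy_segmenter(s, rng):                    # random split point as a fake sampler
    i = rng.randint(1, max(1, len(s) - 1))
    return [s[:i], s[i:]]
train_with_subword_regularization(corpus, toy_segmenter,
                                  lambda x, y: print("update on", x, "->", y))
```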
Additionally, we can use the n-best segmentations of P(x|X) to incorporate multiple segmentation candidates. More specifically, given n-best segmentations (x1, . . . , xn), we choose the best translation y∗ that maximizes the following score. score(x, y) = log P(y|x)/|y|λ, (5) 68 where |y| is the number of subwords in y and λ ∈ R+ is the parameter to penalize shorter sentences. λ is optimized with the development data. In this paper, we call these two algorithms onebest decoding and n-best decoding respectively. 3 Subword segmentations with language model 3.1 Byte-Pair-Encoding (BPE) Byte-Pair-Encoding (BPE) (Sennrich et al., 2016; Schuster and Nakajima, 2012) is a subword segmentation algorithm widely used in many NMT systems1. BPE first splits the whole sentence into individual characters. The most frequent2 adjacent pairs of characters are then consecutively merged until reaching a desired vocabulary size. Subword segmentation is performed by applying the same merge operations to the test sentence. An advantage of BPE segmentation is that it can effectively balance the vocabulary size and the step size (the number of tokens required to encode the sentence). BPE trains the merged operations only with a frequency of characters. Frequent substrings will be joined early, resulting in common words remaining as one unique symbol. Words consisting of rare character combinations will be split into smaller units, e.g., substrings or characters. Therefore, only with a small fixed size of vocabulary (usually 16k to 32k), the number of required symbols to encode a sentence will not significantly increase, which is an important feature for an efficient decoding. One downside is, however, that BPE is based on a greedy and deterministic symbol replacement, which can not provide multiple segmentations with probabilities. It is not trivial to apply BPE to the subword regularization that depends on segmentation probabilities P(x|X). 3.2 Unigram language model In this paper, we propose a new subword segmentation algorithm based on a unigram language model, which is capable of outputing multiple subword segmentations with probabilities. The unigram language model makes an assumption that 1Strictly speaking, wordpiece model (Schuster and Nakajima, 2012) is different from BPE. We consider wordpiece as a variant of BPE, as it also uses an incremental vocabulary generation with a different loss function. 2Wordpiece model uses a likelihood instead of frequency. each subword occurs independently, and consequently, the probability of a subword sequence x = (x1, . . . , xM) is formulated as the product of the subword occurrence probabilities p(xi)3: P(x) = M ∏ i=1 p(xi), (6) ∀i xi ∈V, ∑ x∈V p(x) = 1, where V is a pre-determined vocabulary. The most probable segmentation x∗for the input sentence X is then given by x∗= arg max x∈S(X) P(x), (7) where S(X) is a set of segmentation candidates built from the input sentence X. x∗is obtained with the Viterbi algorithm (Viterbi, 1967). If the vocabulary V is given, subword occurrence probabilities p(xi) are estimated via the EM algorithm that maximizes the following marginal likelihood L assuming that p(xi) are hidden variables. L = |D| ∑ s=1 log(P(X(s))) = |D| ∑ s=1 log ( ∑ x∈S(X(s)) P(x) ) In the real setting, however, the vocabulary set V is also unknown. Because the joint optimization of vocabulary set and their occurrence probabilities is intractable, we here seek to find them with the following iterative algorithm. 1. 
Heuristically make a reasonably big seed vocabulary from the training corpus. 2. Repeat the following steps until |V| reaches a desired vocabulary size. (a) Fixing the set of vocabulary, optimize p(x) with the EM algorithm. (b) Compute the lossi for each subword xi, where lossi represents how likely the likelihood L is reduced when the subword xi is removed from the current vocabulary. (c) Sort the symbols by lossi and keep top η % of subwords (η is 80, for example). Note that we always keep the subwords consisting of a single character to avoid out-of-vocabulary. 3Target sequence y = (y1, . . . , yN) can also be modeled similarly. 69 There are several ways to prepare the seed vocabulary. The natural choice is to use the union of all characters and the most frequent substrings in the corpus4. Frequent substrings can be enumerated in O(T) time and O(20T) space with the Enhanced Suffix Array algorithm (Nong et al., 2009), where T is the size of the corpus. Similar to (Sennrich et al., 2016), we do not consider subwords that cross word boundaries. As the final vocabulary V contains all individual characters in the corpus, character-based segmentation is also included in the set of segmentation candidates S(X). In other words, subword segmentation with the unigram language model can be seen as a probabilsitic mixture of characters, subwords and word segmentations. 3.3 Subword sampling Subword regularization samples one subword segmentation from the distribution P(x|X) for each parameter update. A straightforward approach for an approximate sampling is to use the l-best segmentations. More specifically, we first obtain l-best segmentations according to the probability P(x|X). l-best search is performed in linear time with the Forward-DP Backward-A* algorithm (Nagata, 1994). One segmentation xi is then sampled from the multinomial distribution P(xi|X) ∼= P(xi)α/ ∑l i=1 P(xi)α, where α ∈ R+ is the hyperparameter to control the smoothness of the distribution. A smaller α leads to sample xi from a more uniform distribution. A larger α tends to select the Viterbi segmentation. Setting l →∞, in theory, allows to take all possible segmentations into account. However, it is not feasible to increase l explicitly as the number of candidates increases exponentially with respect to the sentence length. In order to exactly sample from all possible segmentations, we use the Forward-Filtering and Backward-Sampling algorithm (FFBS) (Scott, 2002), a variant of the dynamic programming originally introduced by Bayesian hidden Markov model training. In FFBS, all segmentation candidates are represented in a compact lattice structure, where each node denotes a subword. In the first pass, FFBS computes a set of forward probabilities for all subwords in the lattice, which provide the probability of ending up in any particular subword w. In the second 4It is also possible to run BPE with a sufficient number of merge operations. pass, traversing the nodes in the lattice from the end of the sentence to the beginning of the sentence, subwords are recursively sampled for each branch according to the forward probabilities. 3.4 BPE vs. Unigram language model BPE was originally introduced in the data compression literature (Gage, 1994). BPE is a variant of dictionary (substitution) encoder that incrementally finds a set of symbols such that the total number of symbols for encoding the text is minimized. 
On the other hand, the unigram language model is reformulated as an entropy encoder that minimizes the total code length for the text. According to Shannon’s coding theorem, the optimal code length for a symbol s is −log ps, where ps is the occurrence probability of s. This is essentially the same as the segmentation strategy of the unigram language model described as (7). BPE and the unigram language model share the same idea that they encode a text using fewer bits with a certain data compression principle (dictionary vs. entropy). Therefore, we expect to see the same benefit as BPE with the unigram language model. However, the unigram language model is more flexible as it is based on a probabilistic language model and can output multiple segmentations with their probabilities, which is an essential requirement for subword regularization. 4 Related Work Regularization by noise is a well studied technique in deep neural networks. A well-known example is dropout (Srivastava et al., 2014), which randomly turns off a subset of hidden units during training. Dropout is analyzed as an ensemble training, where many different models are trained on different subsets of the data. Subword regularization trains the model on different data inputs randomly sampled from the original input sentences, and thus is regarded as a variant of ensemble training. The idea of noise injection has previously been used in the context of Denoising Auto-Encoders (DAEs) (Vincent et al., 2008), where noise is added to the inputs and the model is trained to reconstruct the original inputs. There are a couple of studies that employ DAEs in natural language processing. (Lample et al., 2017; Artetxe et al., 2017) independently propose DAEs in the context of 70 sequence-to-sequence learning, where they randomly alter the word order of the input sentence and the model is trained to reconstruct the original sentence. Their technique is applied to an unsupervised machine translation to make the encoder truly learn the compositionality of input sentences. Word dropout (Iyyer et al., 2015) is a simple approach for a bag-of-words representation, in which the embedding of a certain word sequence is simply calculated by averaging the word embeddings. Word dropout randomly drops words from the bag before averaging word embeddings, and consequently can see 2|X| different token sequences for each input X. (Belinkov and Bisk, 2017) explore the training of character-based NMT with a synthetic noise that randomly changes the order of characters in a word. (Xie et al., 2017) also proposes a robust RNN language model that interpolates random unigram language model. The basic idea and motivation behind subword regularization are similar to those of previous work. In order to increase the robustness, they inject noise to input sentences by randomly changing the internal representation of sentences. However, these previous approaches often depend on heuristics to generate synthetic noises, which do not always reflect the real noises on training and inference. In addition, these approaches can only be applied to source sentences (encoder), as they irreversibly rewrite the surface of sentences. Subword regularization, on the other hand, generates synthetic subword sequences with an underlying language model to better emulate the noises and segmentation errors. As subword regularization is based on an invertible conversion, we can safely apply it both to source and target sentences. 
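The one-best segmentation of Equation (7), which the entropy-coding view of Section 3.4 relies on, can be computed with a short Viterbi dynamic program; the sketch below uses an invented toy vocabulary and probabilities.

```python
import math

def viterbi_segment(sentence, vocab_logp):
    """Viterbi segmentation in the spirit of Equation (7): the segmentation x*
    maximizing the product of unigram probabilities over a fixed vocabulary.

    vocab_logp maps each subword in V to log p(subword); single characters are
    assumed to be in V so that every input can be segmented.
    """
    n = len(sentence)
    best = [-math.inf] * (n + 1)    # best log-probability of each prefix
    back = [0] * (n + 1)            # split point achieving it
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(j):
            piece = sentence[i:j]
            if piece in vocab_logp and best[i] + vocab_logp[piece] > best[j]:
                best[j] = best[i] + vocab_logp[piece]
                back[j] = i
    pieces, j = [], n
    while j > 0:
        pieces.append(sentence[back[j]:j])
        j = back[j]
    return pieces[::-1], best[n]

# Toy vocabulary with made-up probabilities.
logp = {w: math.log(p) for w, p in
        {"hello": 0.05, "hell": 0.02, "o": 0.1, "he": 0.05, "llo": 0.01,
         "h": 0.05, "e": 0.1, "l": 0.1}.items()}
print(viterbi_segment("hello", logp))
```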
Subword regularization can also be viewed as a data augmentation. In subword regularization, an input sentence is converted into multiple invariant sequences, which is similar to the data augmentation for image classification tasks, for example, random flipping, distorting, or cropping. There are several studies focusing on segmentation ambiguities in language modeling. Latent Sequence Decompositions (LSDs) (Chan et al., 2016) learns the mapping from the input and the output by marginalizing over all possible segmentations. LSDs and subword regularization do not assume a predetermined segmentation for a sentence, and take multiple segmentations by a similar marginalization technique. The difference is that subword regularization injects the multiple segmentations with a separate language model through an on-the-fly subword sampling. This approach makes the model simple and independent from NMT architectures. Lattice-to-sequence models (Su et al., 2017; Sperber et al., 2017) are natural extension of sequence-to-sequence models, which represent inputs uncertainty through lattices. Lattice is encoded with a variant of TreeLSTM (Tai et al., 2015), which requires changing the model architecture. In addition, while subword regularization is applied both to source and target sentences, lattice-to-sequence models do not handle target side ambiguities. A mixed word/character model (Wu et al., 2016) addresses the out-of-vocabulary problem with a fixed vocabulary. In this model, out-ofvocabulary words are not collapsed into a single UNK symbol, but converted into the sequence of characters with special prefixes representing the positions in the word. Similar to BPE, this model also encodes a sentence into a unique fixed sequence, thus multiple segmentations are not taken into account. 5 Experiments 5.1 Setting We conducted experiments using multiple corpora with different sizes and languages. Table 2 summarizes the evaluation data we used 5 6 7 8 9 10. IWSLT15/17 and KFTT are relatively small corpora, which include a wider spectrum of languages with different linguistic properties. They can evaluate the language-agnostic property of subword regularization. ASPEC and WMT14 (en↔de) are medium-sized corpora. WMT14 (en↔cs) is a rather big corpus consisting of more than 10M parallel sentences. We used GNMT (Wu et al., 2016) as the implementation of the NMT system for all experiments. We generally followed the settings and training procedure described in (Wu et al., 2016), however, we changed the settings according to the 5IWSLT15: http://workshop2015.iwslt.org/ 6IWSLT17: http://workshop2017.iwslt.org/ 7KFTT: http://www.phontron.com/kftt/ 8ASPEC: http://lotus.kuee.kyoto-u.ac.jp/ASPEC/ 9WMT14: http://statmt.org/wmt14/ 10WMT14(en↔de) uses the same setting as (Wu et al., 2016). 71 corpus size. Table 2 shows the hyperparameters we used in each experiment. As common settings, we set the dropout probability to be 0.2. For parameter estimation, we used a combination of Adam (Kingma and Adam, 2014) and SGD algorithms. Both length normalization and converge penalty parameters are set to 0.2 (see section 7 in (Wu et al., 2016)). We set the decoding beam size to 4. The data was preprocessed with Moses tokenizer before training subword models. It should be noted, however, that Chinese and Japanese have no explicit word boundaries and Moses tokenizer does not segment sentences into words, and hence subword segmentations are trained almost from unsegmented raw sentences in these languages. 
We used the case sensitive BLEU score (Papineni et al., 2002) as an evaluation metric. As the output sentences are not segmented in Chinese and Japanese, we segment them with characters and KyTea11 for Chinese and Japanese respectively before calculating BLEU scores. BPE segmentation is used as a baseline system. We evaluate three test systems with different sampling strategies: (1) Unigram language model-based subword segmentation without subword regularization (l = 1), (2) with subword regularization (l = 64, α = 0.1) and (3) (l = ∞, α = 0.2/0.5) 0.2: IWSLT, 0.5: others. These sampling parameters were determined with preliminary experiments. l = 1 is aimed at a pure comparison between BPE and the unigram language model. In addition, we compare one-best decoding and n-best decoding (See section 2.2). Because BPE is not able to provide multiple segmentations, we only evaluate one-best decoding for BPE. Consequently, we compare 7 systems (1 + 3 × 2) for each language pair. 5.2 Main Results Table 3 shows the translation experiment results. First, as can be seen in the table, BPE and unigram language model without subword regularization (l = 1) show almost comparable BLEU scores. This is not surprising, given that both BPE and the unigram language model are based on data compression algorithms. We can see that subword regularization (l > 1) boosted BLEU scores quite impressively (+1 to 2 points) in all language pairs except for WMT14 11http://www.phontron.com/kytea (en→cs) dataset. The gains are larger especially in lower resource settings (IWSLT and KFTT). It can be considered that the positive effects of data augmentation with subword regularization worked better in lower resource settings, which is a common property of other regularization techniques. As for the sampling algorithm, (l = ∞α = 0.2/0.5) slightly outperforms (l = 64, α = 0.1) on IWSLT corpus, but they show almost comparable results on larger data set. Detailed analysis is described in Section 5.5. On top of the gains with subword regularization, n-best decoding yields further improvements in many language pairs. However, we should note that the subword regularization is mandatory for n-best decoding and the BLEU score is degraded in some language pairs without subword regularization (l = 1). This result indicates that the decoder is more confused for multiple segmentations when they are not explored at training time. 5.3 Results with out-of-domain corpus To see the effect of subword regularization on a more open-domain setting, we evaluate the systems with out-of-domain in-house data consisting of multiple genres: Web, patents and query logs. Note that we did not conduct the comparison with KFTT and ASPEC corpora, as we found that the domains of these corpora are too specific12, and preliminary evaluations showed extremely poor BLEU scores (less than 5) on out-of-domain corpora. Table 4 shows the results. Compared to the gains obtained with the standard in-domain evaluations in Table 3, subword regularization achieves significantly larger improvements (+2 points) in every domain of corpus. An interesting observation is that we have the same level of improvements even on large training data sets (WMT14), which showed marginal or small gains with the in-domain data. This result strongly supports our claim that subword regularization is more useful for open-domain settings. 
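Returning briefly to the decoding variants compared above: n-best decoding relies on the unigram model's ability to enumerate several segmentations of a test sentence with their scores, which a deterministic BPE model cannot provide. A minimal sketch (the model file name is a placeholder carried over from the previous example):

```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.load("unigram.model")  # unigram model from the earlier sketch (placeholder)

# The unigram LM can enumerate multiple plausible segmentations of one
# sentence; these are the hypotheses that n-best decoding averages over.
for pieces in sp.nbest_encode_as_pieces("Hello world.", 5):
    print(pieces)
```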
5.4 Comparison with other segmentation algorithms Table 5 shows the comparison on different segmentation algorithms: word, character, mixed word/character (Wu et al., 2016), BPE 12KFTT focuses on Wikipedia articles related to Kyoto, and ASPEC is a corpus of scientific paper domain. Therefore, it is hard to translate out-of-domain texts. 72 Size of sentences Parameters Corpus Language pair train dev test #vocab (Enc/Dec shared) #dim of LSTM, embedding #layers of LSTM (Enc+Dec) IWSLT15 en ↔vi 133k 1553 1268 16k 512 2+2 en ↔zh 209k 887 1261 16k 512 2+2 IWSLT17 en ↔fr 232k 890 1210 16k 512 2+2 en ↔ar 231k 888 1205 16k 512 2+2 KFTT en ↔ja 440k 1166 1160 8k 512 6+6 ASPEC en ↔ja 2M 1790 1812 16k 512 6+6 WMT14 en ↔de 4.5M 3000 3003 32k 1024 8+8 en ↔cs 15M 3000 3003 32k 1024 8+8 Table 2: Details of evaluation data set Proposed (one-best decoding) Proposed (n-best decoding, n=64) Corpus Language pair baseline (BPE) l = 1 l = 64 α = 0.1 l = ∞ α=0.2/0.5 l = 1 l = 64 α = 0.1 l = ∞ α=0.2/0.5 IWSLT15 en →vi 25.61 25.49 27.68* 27.71* 25.33 28.18* 28.48* vi →en 22.48 22.32 24.73* 26.15* 22.04 24.66* 26.31* en →zh 16.70 16.90 19.36* 20.33* 16.73 20.14* 21.30* zh →en 15.76 15.88 17.79* 16.95* 16.23 17.75* 17.29* IWSLT17 en →fr 35.53 35.39 36.70* 36.36* 35.16 37.60* 37.01* fr →en 33.81 33.74 35.57* 35.54* 33.69 36.07* 36.06* en →ar 13.01 13.04 14.92* 15.55* 12.29 14.90* 15.36* ar →en 25.98 27.09* 28.47* 29.22* 27.08* 29.05* 29.29* KFTT en →ja 27.85 28.92* 30.37* 30.01* 28.55* 31.46* 31.43* ja →en 21.37 21.46 22.33* 22.04* 21.37 22.47* 22.64* ASPEC en →ja 40.62 40.66 41.24* 41.23* 40.86 41.55* 41.87* ja →en 26.51 26.76 27.08* 27.14* 27.49* 27.75* 27.89* WMT14 en →de 24.53 24.50 25.04* 24.74 22.73 25.00* 24.57 de →en 28.01 28.65* 28.83* 29.39* 28.24 29.13* 29.97* en →cs 25.25 25.54 25.41 25.26 24.88 25.49 25.38 cs →en 28.78 28.84 29.64* 29.41* 25.77 29.23* 29.15* Table 3: Main Results (BLEU(%)) (l: sampling size in SR, α: smoothing parameter). * indicates statistically significant difference (p < 0.05) from baselines with bootstrap resampling (Koehn, 2004). The same mark is used in Table 4 and 6. (Sennrich et al., 2016) and our unigram model with or without subword regularization. The BLEU scores of word, character and mixed word/character models are cited from (Wu et al., 2016). As German is a morphologically rich language and needs a huge vocabulary for word models, subword-based algorithms perform a gain of more than 1 BLEU point than word model. Among subword-based algorithms, the unigram language model with subword regularization achieved the best BLEU score (25.04), which demonstrates the effectiveness of multiple subword segmentations. 5.5 Impact of sampling hyperparameters Subword regularization has two hyperparameters: l: size of sampling candidates, α: smoothing constant. Figure 1 shows the BLEU scores of various hyperparameters on IWSLT15 (en →vi) dataset. First, we can find that the peaks of BLEU scores against smoothing parameter α are different depending on the sampling size l. This is expected, because l = ∞has larger search space than l = 64, and needs to set α larger to sample sequences close to the Viterbi sequence x∗. Another interesting observation is that α = 0.0 leads to performance drops especially on l = ∞. When α = 0.0, the segmentation probability P(x|X) is virtually ignored and one segmentation is uniformly sampled. This result suggests that biased sampling with a language model is helpful to emulate the real noise in the actual translation. 
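The role of the smoothing constant α can be illustrated directly. Assuming the temperature-style renormalization used for l-best sampling, where each candidate's probability is raised to the power α and renormalized, α = 0 flattens the distribution to uniform (matching the observation above) while large α concentrates mass on the Viterbi segmentation. The candidate probabilities below are invented for illustration.

```python
import numpy as np

# Probabilities of, say, l = 3 candidate segmentations of one sentence under
# the unigram LM; the numbers are invented for illustration.
p = np.array([0.60, 0.30, 0.10])

def smooth(p, alpha):
    """Renormalize candidate probabilities with smoothing constant alpha."""
    q = p ** alpha
    return q / q.sum()

for alpha in [0.0, 0.1, 0.5, 1.0, 5.0]:
    print(alpha, np.round(smooth(p, alpha), 3))

# alpha = 0.0 -> uniform sampling (the segmentation LM is ignored);
# large alpha -> sampling collapses onto the Viterbi segmentation.
```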
In general, larger l allows a more aggressive regularization and is more effective for low resource settings such as IWSLT. However, the estimation of α is more sensitive and performance becomes even worse than baseline when α is extremely small. To weaken the effect of regularization and avoid selecting invalid parameters, it might be more reasonable to use l = 64 for high resource languages. 73 Domain (size) Corpus Language pair Baseline (BPE) Proposed (SR) Web IWSLT15 en →vi 13.86 17.36* (5k) vi →en 7.83 11.69* en →zh 9.71 13.85* zh →en 5.93 8.13* IWSLT17 en →fr 16.09 20.04* fr →en 14.77 19.99* WMT14 en →de 22.71 26.02* de →en 26.42 29.63* en →cs 19.53 21.41* cs →en 25.94 27.86* Patent WMT14 en →de 15.63 25.76* (2k) de →en 22.74 32.66* en →cs 16.70 19.38* cs →en 23.20 25.30* Query IWSLT15 en →zh 9.30 12.47* (2k) zh →en 14.94 19.99* IWSLT17 en →fr 10.79 10.99 fr →en 19.01 23.96* WMT14 en →de 25.93 29.82* de →en 26.24 30.90* Table 4: Results with out-of-domain corpus (l = ∞, α = 0.2: IWSLT15/17, l = 64, α = 0.1: others, one-best decding) Model BLEU Word 23.12 Character (512 nodes) 22.62 Mixed Word/Character 24.17 BPE 24.53 Unigram w/o SR (l = 1) 24.50 Unigram w/ SR (l = 64, α = 0.1) 25.04 Table 5: Comparison of different segmentation algorithms (WMT14 en→de) Although we can see in general that the optimal hyperparameters are roughly predicted with the held-out estimation, it is still an open question how to choose the optimal size l in subword sampling. 5.6 Results with single side regularization Table 6 summarizes the BLEU scores with subword regularization either on source or target sentence to figure out which components (encoder or decoder) are more affected. As expected, we can see that the BLEU scores with single side regularization are worse than full regularization. However, it should be noted that single side regularization still has positive effects. This result implies that subword regularization is not only helpful for encoder-decoder architectures, but applicable to other NLP tasks that only use an either encoder or decoder, including text classification 16 18 20 22 24 26 28 0 0.2 0.4 0.6 0.8 1 BLEU (%) Hyperparameter  l = 64(dev) l = 64(test) l = (dev) l = (test) baseline(dev) baseline(test) Figure 1: Effect of sampling hyperparameters Regularization type en→vi vi→en en→ar ar→en No reg. (baseline) 25.49 22.32 13.04 27.09 Source only 26.00 23.09* 13.46 28.16* Target only 26.10 23.62* 14.34* 27.89* Source and target 27.68* 24.73* 14.92* 28.47* Table 6: Comparison on different regularization strategies (IWSLT15/17, l = 64, α = 0.1) (Iyyer et al., 2015) and image caption generation (Vinyals et al., 2015). 6 Conclusions In this paper, we presented a simple regularization method, subword regularization13, for NMT, with no change to the network architecture. The central idea is to virtually augment training data with on-the-fly subword sampling, which helps to improve the accuracy as well as robustness of NMT models. In addition, for better subword sampling, we propose a new subword segmentation algorithm based on the unigram language model. Experiments on multiple corpora with different sizes and languages show that subword regularization leads to significant improvements especially on low resource and open-domain settings. Promising avenues for future work are to apply subword regularization to other NLP tasks based on encoder-decoder architectures, e.g., dialog generation (Vinyals and Le, 2015) and automatic summarization (Rush et al., 2015). 
Compared to machine translation, these tasks do not have enough training data, and thus there could be a large room for improvement with subword regularization. Additionally, we would like to explore the application of subword regularization for machine learning, including Denoising Auto Encoder (Vincent et al., 2008) and Adversarial Training (Goodfellow et al., 2015). 13Implementation is available at https://github.com/google/sentencepiece 74 References Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2017. Unsupervised neural machine translation. arXive preprint arXiv:1710.11041 . Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine translation. arXive preprint arXiv:1711.02173 . William Chan, Yu Zhang, Quoc Le, and Navdeep Jaitly. 2016. Latent sequence decompositions. arXiv preprint arXiv:1610.03035 . Rohan Chitnis and John DeNero. 2015. Variablelength word encodings for neural translation models. In Proc. of EMNLP. pages 2088–2093. Michael Denkowski and Graham Neubig. 2017. Stronger baselines for trustable results in neural machine translation. Proc. of Workshop on Neural Machine Translation . Philip Gage. 1994. A new algorithm for data compression. C Users J. 12(2):23–38. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122 . Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In Proc. of ICLR. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proc. of ACL. Diederik P Kingma and Jimmy Ba Adam. 2014. A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proc. of EMNLP. Guillaume Lample, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXive preprint arXiv:1711.00043 . Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proc of EMNLP. Masaaki Nagata. 1994. A stochastic japanese morphological analyzer using a forward-dp backward-a* nbest search algorithm. In Proc. of COLING. Toshiaki Nakazawa, Shohei Higashiyama, Chenchen Ding, Hideya Mino, Isao Goto, Hideto Kazawa, Yusuke Oda, Graham Neubig, and Sadao Kurohashi. 2017. Overview of the 4th workshop on asian translation. In Proceedings of the 4th Workshop on Asian Translation (WAT2017). pages 1–54. Ge Nong, Sen Zhang, and Wai Hong Chan. 2009. Linear suffix array construction by almost pure inducedsorting. In Proc. of DCC. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proc. of ACL. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proc. of EMNLP. Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In Proc. of ICASSP. Steven L Scott. 2002. Bayesian methods for hidden markov models: Recursive computing in the 21st century. Journal of the American Statistical Association . Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. 
Neural machine translation of rare words with subword units. In Proc. of ACL. Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2017. Neural lattice-to-sequence models for uncertain inputs. In Proc. of EMNLP. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. JMLR 15(1). Jinsong Su, Zhixing Tan, De yi Xiong, Rongrong Ji, Xiaodong Shi, and Yang Liu. 2017. Lattice-based recurrent neural network encoders for neural machine translation. In AAAI. pages 3302–3308. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. Proc. of ACL . Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXive preprint arXiv:1706.03762 . Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In Proc. of ICML. Oriol Vinyals and Quoc V. Le. 2015. A neural conversational model. In ICML Deep Learning Workshop. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Computer Vision and Pattern Recognition. 75 Andrew Viterbi. 1967. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE transactions on Information Theory 13(2):260–269. Yonghui Wu, Mike Schuster, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 . Ziang Xie, Sida I. Wang, Jiwei Li, Daniel L´evy, Aiming Nie, Dan Jurafsky, and Andrew Y. Ng. 2017. Data noising as smoothing in neural network language models. In Proc. of ICLR.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 754–765 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 754 STRUCTVAE: Tree-structured Latent Variable Models for Semi-supervised Semantic Parsing Pengcheng Yin, Chunting Zhou, Junxian He, Graham Neubig Language Technologies Institute Carnegie Mellon University {pcyin,ctzhou,junxianh,gneubig}@cs.cmu.edu Abstract Semantic parsing is the task of transducing natural language (NL) utterances into formal meaning representations (MRs), commonly represented as tree structures. Annotating NL utterances with their corresponding MRs is expensive and timeconsuming, and thus the limited availability of labeled data often becomes the bottleneck of data-driven, supervised models. We introduce STRUCTVAE, a variational auto-encoding model for semisupervised semantic parsing, which learns both from limited amounts of parallel data, and readily-available unlabeled NL utterances. STRUCTVAE models latent MRs not observed in the unlabeled data as treestructured latent variables. Experiments on semantic parsing on the ATIS domain and Python code generation show that with extra unlabeled data, STRUCTVAE outperforms strong supervised models.1 1 Introduction Semantic parsing tackles the task of mapping natural language (NL) utterances into structured formal meaning representations (MRs). This includes parsing to general-purpose logical forms such as λ-calculus (Zettlemoyer and Collins, 2005, 2007) and the abstract meaning representation (AMR, Banarescu et al. (2013); Misra and Artzi (2016)), as well as parsing to computerexecutable programs to solve problems such as question answering (Berant et al., 2013; Yih et al., 2015; Liang et al., 2017), or generation of domainspecific (e.g., SQL) or general purpose programming languages (e.g., Python) (Quirk et al., 2015; Yin and Neubig, 2017; Rabinovich et al., 2017). 1Code available at http://pcyin.me/struct vae Structured Latent Semantic Space (MRs) p(z) Inference Model qφ(z|x) Reconstruction Model p✓(x|z) Sort my_list in descending order z Figure 1: Graphical Representation of STRUCTVAE While these models have a long history (Zelle and Mooney, 1996; Tang and Mooney, 2001), recent advances are largely attributed to the success of neural network models (Xiao et al., 2016; Ling et al., 2016; Dong and Lapata, 2016; Iyer et al., 2017; Zhong et al., 2017). However, these models are also extremely data hungry: optimization of such models requires large amounts of training data of parallel NL utterances and manually annotated MRs, the creation of which can be expensive, cumbersome, and time-consuming. Therefore, the limited availability of parallel data has become the bottleneck of existing, purely supervised-based models. These data requirements can be alleviated with weakly-supervised learning, where the denotations (e.g., answers in question answering) of MRs (e.g., logical form queries) are used as indirect supervision (Clarke et al. (2010); Liang et al. (2011); Berant et al. (2013), inter alia), or dataaugmentation techniques that automatically generate pseudo-parallel corpora using hand-crafted or induced grammars (Jia and Liang, 2016; Wang et al., 2015). In this work, we focus on semi-supervised learning, aiming to learn from both limited 755 amounts of parallel NL-MR corpora, and unlabeled but readily-available NL utterances. 
We draw inspiration from recent success in applying variational auto-encoding (VAE) models in semisupervised sequence-to-sequence learning (Miao and Blunsom, 2016; Kocisk´y et al., 2016), and propose STRUCTVAE — a principled deep generative approach for semi-supervised learning with tree-structured latent variables (Fig. 1). STRUCTVAE is based on a generative story where the surface NL utterances are generated from treestructured latent MRs following the standard VAE architecture: (1) an off-the-shelf semantic parser functions as the inference model, parsing an observed NL utterance into latent meaning representations (§ 3.2); (2) a reconstruction model decodes the latent MR into the original observed utterance (§ 3.1). This formulation enables our model to perform both standard supervised learning by optimizing the inference model (i.e., the parser) using parallel corpora, and unsupervised learning by maximizing the variational lower bound of the likelihood of the unlabeled utterances (§ 3.3). In addition to these contributions to semisupervised semantic parsing, STRUCTVAE contributes to generative model research as a whole, providing a recipe for training VAEs with structured latent variables. Such a structural latent space is contrast to existing VAE research using flat representations, such as continuous distributed representations (Kingma and Welling, 2013), discrete symbols (Miao and Blunsom, 2016), or hybrids of the two (Zhou and Neubig, 2017). We apply STRUCTVAE to semantic parsing on the ATIS domain and Python code generation. As an auxiliary contribution, we implement a transition-based semantic parser, which uses Abstract Syntax Trees (ASTs, § 3.2) as intermediate MRs and achieves strong results on the two tasks. We then apply this parser as the inference model for semi-supervised learning, and show that with extra unlabeled data, STRUCTVAE outperforms its supervised counterpart. We also demonstrate that STRUCTVAE is compatible with different structured latent representations, applying it to a simple sequence-to-sequence parser which uses λ-calculus logical forms as MRs. 2 Semi-supervised Semantic Parsing In this section we introduce the objectives for semi-supervised semantic parsing, and present high-level intuition in applying VAEs for this task. 2.1 Supervised and Semi-supervised Training Formally, semantic parsing is the task of mapping utterance x to a meaning representation z. As noted above, there are many varieties of MRs that can be represented as either graph structures (e.g., AMR) or tree structures (e.g., λ-calculus and ASTs for programming languages). In this work we specifically focus on tree-structured MRs (see Fig. 2 for a running example Python AST), although application of a similar framework to graph-structured representations is also feasible. Traditionally, purely supervised semantic parsers train a probabilistic model pφ(z|x) using parallel data L of NL utterances and annotated MRs (i.e., L = {⟨x, z⟩}). As noted in the introduction, one major bottleneck in this approach is the lack of such parallel data. Hence, we turn to semi-supervised learning, where the model additionally has access to a relatively large amount of unlabeled NL utterances U = {x}. Semi-supervised learning then aims to maximize the log-likelihood of examples in both L and U: J = X ⟨x,z⟩∈L log pφ(z|x) | {z } supervised obj. Js +α X x∈U log p(x) | {z } unsupervised obj. 
Ju (1) The joint objective consists of two terms: (1) a supervised objective Js that maximizes the conditional likelihood of annotated MRs, as in standard supervised training of semantic parsers; and (2) a unsupervised objective Ju, which maximizes the marginal likelihood p(x) of unlabeled NL utterances U, controlled by a tuning parameter α. Intuitively, if the modeling of pφ(z|x) and p(x) is coupled (e.g., they share parameters), then optimizing the marginal likelihood p(x) using the unsupervised objective Ju would help the learning of the semantic parser pφ(z|x) (Zhu, 2005). STRUCTVAE uses the variational auto-encoding framework to jointly optimize pφ(z|x) and p(x), as outlined in § 2.2 and detailed in § 3. 2.2 VAEs for Semi-supervised Learning From Eq. (1), our semi-supervised model must be able to calculate the probability p(x) of unlabeled NL utterances. To model p(x), we use VAEs, which provide a principled framework for generative models using neural networks (Kingma and Welling, 2013). As shown in Fig. 1, VAEs define a generative story (bold arrows in Fig. 1, explained in § 3.1) to model p(x), where a latent MR z is 756 sampled from a prior, and then passed to the reconstruction model to decode into the surface utterance x. There is also an inference model qφ(z|x) that allows us to infer the most probable latent MR z given the input x (dashed arrows in Fig. 1, explained in § 3.2). In our case, the inference process is equivalent to the task of semantic parsing if we set qφ(·) ≜pφ(·). VAEs also provide a framework to compute an approximation of p(x) using the inference and reconstruction models, allowing us to effectively optimize the unsupervised and supervised objectives in Eq. (1) in a joint fashion (Kingma et al. (2014), explained in § 3.3). 3 STRUCTVAE: VAEs with Tree-structured Latent Variables 3.1 Generative Story STRUCTVAE follows the standard VAE architecture, and defines a generative story that explains how an NL utterance is generated: a latent meaning representation z is sampled from a prior distribution p(z) over MRs, which encodes the latent semantics of the utterance. A reconstruction model pθ(x|z) then decodes the sampled MR z into the observed NL utterance x. Both the prior p(z) and the reconstruction model p(x|z) takes tree-structured MRs as inputs. To model such inputs with rich internal structures, we follow Konstas et al. (2017), and model the distribution over a sequential surface representation of z, zs instead. Specifically, we have p(z) ≜ p(zs) and pθ(x|z) ≜pθ(x|zs)2. For code generation, zs is simply the surface source code of the AST z. For semantic parsing, zs is the linearized s-expression of the logical form. Linearization allows us to use standard sequence-to-sequence networks to model p(z) and pθ(x|z). As we will explain in § 4.3, we find these two components perform well with linearization. Specifically, the prior is parameterized by a Long Short-Term Memory (LSTM) language model over zs. The reconstruction model is an attentional sequence-to-sequence network (Luong et al., 2015), augmented with a copying mechanism (Gu et al., 2016), allowing an out-ofvocabulary (OOV) entity in zs to be copied to x (e.g., the variable name my list in Fig. 1 and its AST in Fig. 2). We refer readers to Appendix B for details of the neural network architecture. 2Linearizion is used by the prior and the reconstruction model only, and not by the inference model. 
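Before turning to the inference model, it may help to see schematically how the two objectives are combined during training. The sketch below is a PyTorch-style training step, not the authors' implementation: supervised_loss and unsupervised_elbo are hypothetical placeholders standing in for Eq. (2) and the variational lower bound of § 3.3, and α = 0.1 follows the value reported for the experiments (§ 4.2).

```python
import torch

# Schematic semi-supervised training step for Eq. (1): J = Js + alpha * Ju.
# `models`, `supervised_loss` and `unsupervised_elbo` are placeholders for the
# inference model, reconstruction model, prior, and the two loss terms.
ALPHA = 0.1  # weight of the unsupervised objective, as in the experiments

def train_step(labeled_batch, unlabeled_batch, models, optimizer,
               supervised_loss, unsupervised_elbo, alpha=ALPHA):
    x_l, z_l = labeled_batch      # parallel NL utterances and gold MRs
    x_u = unlabeled_batch         # unlabeled NL utterances

    # Eq. (2): -(log q(z|x) + log p(x|z)) on labeled data.
    loss_sup = supervised_loss(models, x_l, z_l)

    # Eq. (3): negative variational lower bound of log p(x) on unlabeled data,
    # estimated from latent MRs sampled by the parser (beam search samples).
    loss_unsup = -unsupervised_elbo(models, x_u)

    loss = loss_sup + alpha * loss_unsup
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```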
3.2 Inference Model STRUCTVAE models the semantic parser pφ(z|x) as the inference model qφ(z|x) in VAE (§ 2.2), which maps NL utterances x into tree-structured meaning representations z. qφ(z|x) can be any trainable semantic parser, with the corresponding MRs forming the structured latent semantic space. In this work, we primarily use a semantic parser based on the Abstract Syntax Description Language (ASDL) framework (Wang et al., 1997) as the inference model. The parser encodes x into ASTs (Fig. 2). ASTs are the native meaning representation scheme of source code in modern programming languages, and can also be adapted to represent other semantic structures, like λ-calculus logical forms (see § 4.2 for details). We remark that STRUCTVAE works with other semantic parsers with different meaning representations as well (e.g., using λ-calculus logical forms for semantic parsing on ATIS, explained in § 4.3). Our inference model is a transition-based parser inspired by recent work in neural semantic parsing and code generation. The transition system is an adaptation of Yin and Neubig (2017) (hereafter YN17), which decomposes the generation process of an AST into sequential applications of treeconstruction actions following the ASDL grammar, thus ensuring the syntactic well-formedness of generated ASTs. Different from YN17, where ASTs are represented as a Context Free Grammar learned from a parsed corpus, we follow Rabinovich et al. (2017) and use ASTs defined under the ASDL formalism (§ 3.2.1). 3.2.1 Generating ASTs with ASDL Grammar First, we present a brief introduction to ASDL. An AST can be generated by applying typed constructors in an ASDL grammar, such as those in Fig. 3 for the Python ASDL grammar. Each constructor specifies a language construct, and is assigned to a particular composite type. For example, the constructor Call has type expr (expression), and it denotes function calls. Constructors are associated with multiple fields. For instance, the Call constructor and has three fields: func, args and keywords. Like constructors, fields are also strongly typed. For example, the func field of Call has expr type. Fields with composite types are instantiated by constructors of the same type, while fields with primitive types store values (e.g., identifier names or string literals). Each field also has 757 Expr Name Call value func Name args keywords Keyword sorted id id my_list reverse Name value arg True t1 t2 t3 t4 t5 t6 t8 t9 t10 t11 id sorted(my_list, reverse=True) t Frontier Field Action t1 stmt root Expr(expr value) t2 expr value Call(expr func, expr* args, keyword* keywords) t3 expr func Name(identifier id) t4 identifier id GENTOKEN[sorted] t5 expr* args Name(identifier id) t6 identifier id GENTOKEN[my list] t7 expr* args REDUCE (close the frontier field) t8 keyword* keywords keyword(identifier arg, expr value) t9 identifier arg GENTOKEN[reverse] t10 expr value Name(identifier id) t11 identifier id GENTOKEN[True] t12 keyword* keywords REDUCE (close the frontier field) Figure 2: Left An example ASDL AST with its surface source code. Field names are labeled on upper arcs. Blue squares denote fields with sequential cardinality. Grey nodes denote primitive identifier fields, with annotated values. Fields are labeled with time steps at which they are generated. Right Action sequences used to construct the example AST. Frontier fields are denoted by their signature (type name). Each constructor in the Action column refers to an APPLYCONSTR action. 
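Since the ASTs in Fig. 2 follow Python's ASDL grammar, the same structure can be inspected with Python's built-in ast module. The snippet below is only a sanity check on the running example; exact node names (e.g., Constant versus the older NameConstant for True, and the ctx fields) vary slightly across Python versions.

```python
import ast

tree = ast.parse("sorted(my_list, reverse=True)")
print(ast.dump(tree.body[0]))
# Roughly (Python 3.8+):
# Expr(value=Call(func=Name(id='sorted', ctx=Load()),
#                 args=[Name(id='my_list', ctx=Load())],
#                 keywords=[keyword(arg='reverse', value=Constant(value=True))]))
# i.e. the Expr -> Call -> {func: Name, args: [Name], keywords: [keyword]}
# structure of Fig. 2, which the transition system rebuilds step by step with
# APPLYCONSTR, GENTOKEN, and REDUCE actions.
```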
stmt FunctionDef(identifier name, arguments args, stmt* body) | ClassDef(identifier name, expr* bases, stmt* body) | Expr(expr value) | Return(expr? value) 7! expr Call(expr func, expr* args, keyword* keywords) | Name(identifier id) | Str(string s) 7! Figure 3: Excerpt of the python abstract syntax grammar (Python Software Foundation, 2016) a cardinality (single, optional ?, and sequential ∗), specifying the number of values the field has. Each node in an AST corresponds to a typed field in a constructor (except for the root node). Depending on the cardinality of the field, an AST node can be instantiated with one or multiple constructors. For instance, the func field in the example AST has single cardinality, and is instantiated with a Name constructor; while the args field with sequential cardinality could have multiple constructors (only one shown in this example). Our parser employs a transition system to generate an AST using three types of actions. Fig. 2 (Right) lists the sequence of actions used to generate the example AST. The generation process starts from an initial derivation with only a root node of type stmt (statement), and proceeds according to the top-down, left-to-right traversal of the AST. At each time step, the parser applies an action to the frontier field of the derivation: APPLYCONSTR[c] actions apply a constructor c to the frontier composite field, expanding the derivation using the fields of c. For fields with single or optional cardinality, an APPLYCONSTR action instantiates the empty frontier field using the constructor, while for fields with sequential cardinality, it appends the constructor to the frontier field. For example, at t2 the Call constructor is applied to the value field of Expr, and the derivation is expanded using its three child fields. REDUCE actions complete generation of a field with optional or multiple cardinalities. For instance, the args field is instantiated by Name at t5, and then closed by a REDUCE action at t7. GENTOKEN[v] actions populate an empty primitive frontier field with token v. A primitive field whose value is a single token (e.g., identifier fields) can be populated with a single GENTOKEN action. Fields of string type can be instantiated using multiple such actions, with a final GENTOKEN[</f>] action to terminate the generation of field values. 3.2.2 Modeling qφ(z|x) The probability of generating an AST z is naturally decomposed into the probabilities of the actions {at} used to construct z: qφ(z|x) = Y t p(at|a<t, x). Following YN17, we parameterize qφ(z|x) using a sequence-to-sequence network with auxiliary recurrent connections following the topology of the AST. Interested readers are referred to Appendix B and Yin and Neubig (2017) for details of the neural network architecture. 3.3 Semi-supervised Learning In this section we explain how to optimize the semi-supervised learning objective Eq. (1) in STRUCTVAE. Supervised Learning For the supervised learning objective, we modify Js, and use the labeled data to optimize both the inference model (the se758 mantic parser) and the reconstruction model: Js ≜ X (x,z)∈L log qφ(z|x) + log pθ(x|z)  (2) Unsupervised Learning To optimize the unsupervised learning objective Ju in Eq. (1), we maximize the variational lower-bound of log p(x): log p(x) ≥Ez∼qφ(z|x) log pθ(x|z)  −λ · KL[qφ(z|x)||p(z)] = L (3) where KL[qφ||p] is the Kullback-Leibler (KL) divergence. 
Following common practice in optimizing VAEs, we introduce λ as a tuning parameter of the KL divergence to control the impact of the prior (Miao and Blunsom, 2016; Bowman et al., 2016). To optimize the parameters of our model in the face of non-differentiable discrete latent variables, we follow Miao and Blunsom (2016), and approximate ∂L ∂φ using the score function estimator (a.k.a. REINFORCE, Williams (1992)): ∂L ∂φ = ∂ ∂φEz∼qφ(z|x)  log pθ(x|z) −λ log qφ(z|x) −log p(z)  | {z } learning signal = ∂ ∂φEz∼qφ(z|x)l′(x, z) ≈ 1 |S(x)| X zi∈S(x) l′(x, zi)∂log qφ(zi|x) ∂φ (4) where we approximate the gradient using a set of samples S(x) drawn from qφ(·|x). To ensure the quality of sampled latent MRs, we follow Guu et al. (2017) and use beam search. The term l′(x, z) is defined as the learning signal (Miao and Blunsom, 2016). The learning signal weights the gradient for each latent sample z. In REINFORCE, to cope with the high variance of the learning signal, it is common to use a baseline b(x) to stabilize learning, and re-define the learning signal as l(x, z) ≜l′(x, z) −b(x). (5) Specifically, in STRUCTVAE, we define b(x) = a · log p(x) + c, (6) where log p(x) is a pre-trained LSTM language model. This is motivated by the empirical observation that log p(x) correlates well with the reconstruction score log pθ(x|z), hence with l′(x, z). Finally, for the reconstruction model, its gradient can be easily computed: ∂L ∂θ ≈ 1 |S(x)| X zi∈S(x) ∂log pθ(x|zi) ∂θ . Discussion Perhaps the most intriguing question here is why semi-supervised learning could improve semantic parsing performance. While the underlying theoretical exposition still remains an active research problem (Singh et al., 2008), in this paper we try to empirically test some likely hypotheses. In Eq. (4), the gradient received by the inference model from each latent sample z is weighed by the learning signal l(x, z). l(x, z) can be viewed as the reward function in REINFORCE learning. It can also be viewed as weights associated with pseudo-training examples {⟨x, z⟩: z ∈ S(x)} sampled from the inference model. Intuitively, a sample z with higher rewards should: (1) have z adequately encode the input, leading to high reconstruction score log pθ(x|z); and (2) have z be succinct and natural, yielding high prior probability. Let z∗denote the gold-standard MR of x. Consider the ideal case where z∗∈S(x) and l(x, z∗) is positive, while l(x, z′) is negative for other imperfect samples z′ ∈S(x), z′ ̸= z∗. In this ideal case, ⟨x, z∗⟩would serve as a positive training example and other samples ⟨x, z′⟩would be treated as negative examples. Therefore, the inference model would receive informative gradient updates, and learn to discriminate between gold and imperfect MRs. This intuition is similar in spirit to recent efforts in interpreting gradient update rules in reinforcement learning (Guu et al., 2017). We will present more empirical statistics and observations in § 4.3. 4 Experiments 4.1 Datasets In our semi-supervised semantic parsing experiments, it is of interest how STRUCTVAE could further improve upon a supervised parser with extra unlabeled data. We evaluate on two datasets: Semantic Parsing We use the ATIS dataset, a collection of 5,410 telephone inquiries of flight booking (e.g., “Show me flights from ci0 to ci1”). The target MRs are defined using λ-calculus logical forms (e.g., “lambda $0 e (and (flight $0) (from $ci0) (to $ci1))”). 
We use the pre-processed dataset released by Dong and Lapata (2016), where entities (e.g., cities) are canonicalized using typed slots (e.g., ci0). To predict λ759 calculus logical forms using our transition-based parser, we use the ASDL grammar defined by Rabinovich et al. (2017) to convert between logical forms and ASTs (see Appendix C for details). Code Generation The DJANGO dataset (Oda et al., 2015) contains 18,805 lines of Python source code extracted from the Django web framework. Each line of code is annotated with an NL utterance. Source code in the DJANGO dataset exhibits a wide variety of real-world use cases of Python, including IO operation, data structure manipulation, class/function definition, etc. We use the pre-processed version released by Yin and Neubig (2017) and use the astor package to convert ASDL ASTs into Python source code. 4.2 Setup Labeled and Unlabeled Data STRUCTVAE requires access to extra unlabeled NL utterances for semi-supervised learning. However, the datasets we use do not accompany with such data. We therefore simulate the semi-supervised learning scenario by randomly sub-sampling K examples from the training split of each dataset as the labeled set L. To make the most use of the NL utterances in the dataset, we construct the unlabeled set U using all NL utterances in the training set3,4. Training Procedure Optimizing the unsupervised learning objective Eq. (3) requires sampling structured MRs from the inference model qφ(z|x). Due to the complexity of the semantic parsing problem, we cannot expect any valid samples from randomly initialized qφ(z|x). We therefore pre-train the inference and reconstruction models using the supervised objective Eq. (2) until convergence, and then optimize using the semisupervised learning objective Eq. (1). Throughout all experiments we set α (Eq. (1)) and λ (Eq. (3)) to 0.1. The sample size |S(x)| is 5. We observe that the variance of the learning signal could still be high when low-quality samples are drawn from the inference model qφ(z|x). We therefore clip 3We also tried constructing U using the disjoint portion of the NL utterances not presented in the labeled set L, but found this yields slightly worse performance, probably due to lacking enough unlabeled data. Interpreting these results would be an interesting avenue for future work. 4While it might be relatively easy to acquire additional unlabeled utterances in practical settings (e.g., through query logs of a search engine), unfortunately most academic semantic parsing datasets, like the ones used in this work, do not feature large sets of in-domain unlabeled data. We therefore perform simulated experiments instead. |L| SUP. SELFTRAIN STRUCTVAE 500 63.2 65.3 66.0 1,000 74.6 74.2 75.7 2,000 80.4 83.3 82.4 3,000 82.8 83.6 83.6 4,434 (All) 85.3 – 84.5 Previous Methods ACC. ZC07 (Zettlemoyer and Collins, 2007) 84.6 WKZ14 (Wang et al., 2014) 91.3 SEQ2TREE (Dong and Lapata, 2016)† 84.6 ASN (Rabinovich et al., 2017)† 85.3 + supervised attention 85.9 Table 1: Performance on ATIS w.r.t. the size of labeled training data L. †Existing neural network-based methods |L| SUP. SELFTRAIN STRUCTVAE 1,000 49.9 49.5 52.0 2,000 56.6 55.8 59.0 3,000 61.0 61.4 62.4 5,000 63.2 64.5 65.6 8,000 70.3 69.6 71.5 12,000 71.1 71.6 72.0 16,000 (All) 73.7 – 72.3 Previous Method ACC. YN17 (Yin and Neubig, 2017) 71.6 Table 2: Performance on DJANGO w.r.t. the size of labeled training data L all learning signals lower than k = −20.0. Earlystopping is used to avoid over-fitting. 
We also pretrain the prior p(z) (§ 3.3) and the baseline function Eq. (6). Readers are referred to Appendix D for more detail of the configurations. Metric As standard in semantic parsing research, we evaluate by exact-match accuracy. 4.3 Main Results Tab. 1 and Tab. 2 list the results on ATIS and DJANGO, resp, with varying amounts of labeled data L. We also present results of training the transition-based parser using only the supervised objective (SUP., Eq. (2)). We also compare STRUCTVAE with self-training (SELFTRAIN), a semi-supervised learning baseline which uses the supervised parser to predict MRs for unlabeled utterances in U −L, and adds the predicted examples to the training set to fine-tune the supervised model. Results for STRUCTVAE are averaged over four runs to account for the additional fluctuation caused by REINFORCE training. Supervised System Comparison First, to highlight the effectiveness of our transition parser based on ASDL grammar (hence the reliability of 760 −30 −20 −10 0 10 20 0.0 0.1 0.2 z∗(ˆµ = 2.59, ˆσ = 30.80) z′(ˆµ = −5.12, ˆσ = 214.62) (a) DJANGO −30 −20 −10 0 10 20 0.0 0.1 0.2 z∗(ˆµ = 0.94, ˆσ = 19.06) z′(ˆµ = −3.35, ˆσ = 96.66) (b) ATIS Figure 4: Histograms of learning signals on DJANGO (|L| = 5000) and ATIS (|L| = 2000). Difference in sample means is statistically significant (p < 0.05). our supervised baseline), we compare the supervised version of our parser with existing parsing models. On ATIS, our supervised parser trained on the full data is competitive with existing neural network based models, surpassing the SEQ2TREE model, and on par with the Abstract Syntax Network (ASN) without using extra supervision. On DJANGO, our model significantly outperforms the YN17 system, probably because the transition system used by our parser is defined natively to construct ASDL ASTs, reducing the number of actions for generating each example. On DJANGO, the average number of actions is 14.3, compared with 20.3 reported in YN17. Semi-supervised Learning Next, we discuss our main comparison between STRUCTVAE with the supervised version of the parser (recall that the supervised parser is used as the inference model in STRUCTVAE, § 3.2). First, comparing our proposed STRUCTVAE with the supervised parser when there are extra unlabeled data (i.e., |L| < 4, 434 for ATIS and |L| < 16, 000 for DJANGO), semi-supervised learning with STRUCTVAE consistently achieves better performance. Notably, on DJANGO, our model registers results as competitive as previous state-of-the-art method (YN17) using only half the training data (71.5 when |L| = 8000 v.s. 71.6 for YN17). This demonstrates that STRUCTVAE is capable of learning from unlabeled NL utterances by inferring high quality, structurally rich latent meaning representations, further improving the performance of its supervised counterpart that is already competitive. Second, comparing STRUCTVAE with self-training, we find STRUCTVAE outperforms SELFTRAIN in eight out of ten settings, while SELFTRAIN 1 2 3 4 5 0.0 0.4 0.8 (a) DJANGO 1 2 3 4 5 0.0 0.4 0.8 (b) ATIS Figure 5: Distribution of the rank of l(x, z∗) in sampled set under-performs the supervised parser in four out of ten settings. This shows self-training does not necessarily yield stable gains while STRUCTVAE does. Intuitively, STRUCTVAE would perform better since it benefits from the additional signal of the quality of MRs from the reconstruction model (§ 3.3), for which we present more analysis in our next set of experiments. 
For the sake of completeness, we also report the results of STRUCTVAE when L is the full training set. Note that in this scenario there is no extra unlabeled data disjoint with the labeled set, and not surprisingly, STRUCTVAE does not outperform the supervised parser. In addition to the supervised objective Eq. (2) used by the supervised parser, STRUCTVAE has the extra unsupervised objective Eq. (3), which uses sampled (probably incorrect) MRs to update the model. When there is no extra unlabeled data, those sampled (incorrect) MRs add noise to the optimization process, causing STRUCTVAE to under-perform. Study of Learning Signals As discussed in § 3.3, in semi-supervised learning, the gradient received by the inference model from each sampled latent MR is weighted by the learning signal. Empirically, we would expect that on average, the learning signals of gold-standard samples z∗, l(x, z∗), are positive, larger than those of other (imperfect) samples z′, l(x, z′). We therefore study the statistics of l(x, z∗) and l(x, z′) for all utterances x ∈U −L, i.e., the set of utterances which are not included in the labeled set.5 The statistics are obtained by performing inference using trained models. Figures 4a and 4b depict the histograms of learning signals on DJANGO and ATIS, resp. We observe that the learning signals for gold samples concentrate on positive intervals. We also show the mean and variance of the learning signals. On average, we have l(x, z∗) being positive and l(x, z) negative. Also note that the distribution of l(x, z∗) has smaller variance and is more concentrated. Therefore the inference model receives informative gradient updates to discriminate between gold and imperfect 5We focus on cases where z∗is in the sample set S(x). 761 NL join p and cmd into a file path, substitute it for f zs 1 f = os.path.join(p, cmd)  log q(z|x) = −1.00 log p(x|z) = −2.00 log p(z) = −24.33 l(x, z) = 9.14 zs 2 p = path.join(p, cmd)  log q(z|x) = −8.12 log p(x|z) = −20.96 log p(z) = −27.89 l(x, z) = −9.47 NL append i-th element of existing to child loggers zs 1 child loggers.append(existing[i])  log q(z|x) = −2.38 log p(x|z) = −9.66 log p(z) = −13.52 l(x, z) = 1.32 zs 2 child loggers.append(existing[existing]) log q(z|x) = −1.83 log p(x|z) = −16.11 log p(z) = −12.43 l(x, z) = −5.08 NL split string pks by ’,’, substitute the result for primary keys zs 1 primary keys = pks.split(’,’)  log q(z|x) = −2.38 log p(x|z) = −11.39 log p(z) = −10.24 l(x, z) = 2.05 zs 2 primary keys = pks.split + ’,’  log q(z|x) = −0.84 log p(x|z) = −14.87 log p(z) = −20.41 l(x, z) = −2.60 Table 3: Inferred latent MRs on DJANGO (|L| = 5000). For simplicity we show the surface representation of MRs (zs, source code) instead. samples. Next, we plot the distribution of the rank of l(x, z∗), among the learning signals of all samples of x, {l(x, zi) : zi ∈S(x)}. Results are shown in Fig. 5. We observe that the gold samples z∗have the largest learning signals in around 80% cases. We also find that when z∗has the largest learning signal, its average difference with the learning signal of the highest-scoring incorrect sample is 1.27 and 0.96 on DJANGO and ATIS, respectively. Finally, to study the relative contribution of the reconstruction score log p(x|z) and the prior log p(z) to the learning signal, we present examples of inferred latent MRs during training (Tab. 3). 
Examples 1&2 show that the reconstruction score serves as an informative quality measure of the latent MR, assigning the correct samples zs 1 with high log p(x|z), leading to positive learning signals. This is in line with our assumption that a good latent MR should adequately encode the semantics of the utterance. Example 3 shows that the prior is also effective in identifying “unnatural” MRs (e.g., it is rare to add a function and a string literal, as in zs 2). These results also suggest that the prior and the reconstruction model perform well with linearization of MRs. Finally, note that in Examples 2&3 the learning signals for the correct samples zs 1 are positive even if their inference scores q(z|x) are lower than those of zs 2. |L| SUPERVISED STRUCTVAE-SEQ 500 47.3 55.6 1,000 62.5 73.1 2,000 73.9 74.8 3,000 80.6 81.3 4,434 (All) 84.6 84.2 Table 4: Performance of the STRUCTVAE-SEQ on ATIS w.r.t. the size of labeled training data L ATIS DJANGO |L| SUP. MLP LM |L| SUP. MLP LM 500 63.2 61.5† 66.0 1,000 49.9 47.0† 52.0 1,000 74.6 76.3 75.7 5,000 63.2 62.5† 65.6 2,000 80.4 82.9 82.4 8,000 70.3 67.6† 71.5 3,000 82.8 81.4† 83.6 12,000 71.1 71.6 72.0 Table 5: Comparison of STRUCTVAE with different baseline functions b(x), italic†: semi-supervised learning with the MLP baseline is worse than supervised results. This result further demonstrates that learning signals provide informative gradient weights for optimizing the inference model. Generalizing to Other Latent MRs Our main results are obtained using a strong AST-based semantic parser as the inference model, with copyaugmented reconstruction model and an LSTM language model as the prior. However, there are many other ways to represent and infer structure in semantic parsing (Carpenter, 1998; Steedman, 2000), and thus it is of interest whether our basic STRUCTVAE framework generalizes to other semantic representations. To examine this, we test STRUCTVAE using λ-calculus logical forms as latent MRs for semantic parsing on the ATIS domain. We use standard sequence-to-sequence networks with attention (Luong et al., 2015) as inference and reconstruction models. The inference model is trained to construct a tree-structured logical form using the transition actions defined in Cheng et al. (2017). We use a classical tri-gram Kneser-Ney language model as the prior. Tab. 4 lists the results for this STRUCTVAE-SEQ model. We can see that even with this very different model structure STRUCTVAE still provides significant gains, demonstrating its compatibility with different inference/reconstruction networks and priors. Interestingly, compared with the results in Tab. 1, we found that the gains are especially larger with few labeled examples — STRUCTVAE-SEQ achieves improvements of 8-10 points when |L| < 1000. These results suggest that semi-supervision is especially useful in improving a mediocre parser in low resource settings. 762 0.0 0.2 0.4 0.6 0.8 1.0 λ 0.62 0.64 0.66 Accuracy StructVAE Sup. Figure 6: Performance on DJANGO (|L| = 5000) w.r.t. the KL weight λ 1000 5000 8000 12000 14000 16000 Size of Unlabeled Data 0.645 0.650 0.655 Accuracy StructVAE Figure 7: Performance on DJANGO (|L| = 5000) w.r.t. the size of unlabeled data U Impact of Baseline Functions In § 3.3 we discussed our design of the baseline function b(x) incorporated in the learning signal (Eq. (4)) to stabilize learning, which is based on a language model (LM) over utterances (Eq. (6)). 
We compare this baseline with a commonly used one in REINFORCE training: the multi-layer perceptron (MLP). The MLP takes as input the last hidden state of the utterance given by the encoding LSTM of the inference model. Tab. 5 lists the results over sampled settings. We found that although STRUCTVAE with the MLP baseline sometimes registers better performance on ATIS, in most settings it is worse than our LM baseline, and could be even worse than the supervised parser. On the other hand, our LM baseline correlates well with the learning signal, yielding stable improvements over the supervised parser. This suggests the importance of using carefully designed baselines in REINFORCE learning, especially when the reward signal has large range (e.g., log-likelihoods). Impact of the Prior p(z) Fig. 6 depicts the performance of STRUCTVAE as a function of the KL term weight λ in Eq. (3). When STRUCTVAE degenerates to a vanilla auto-encoder without the prior distribution (i.e., λ = 0), it under-performs the supervised baseline. This is in line with our observation in Tab. 3 showing that the prior helps identify unnatural samples. The performance of the model also drops when λ > 0.1, suggesting that empirically controlling the influence of the prior to the inference model is important. Impact of Unlabeled Data Size Fig. 7 illustrates the accuracies w.r.t. the size of unlabeled data. STRUCTVAE yields consistent gains as the size of the unlabeled data increases. 5 Related Works Semi-supervised Learning for NLP Semisupervised learning comes with a long history (Zhu, 2005), with applications in NLP from early work of self-training (Yarowsky, 1995), and graph-based methods (Das and Smith, 2011), to recent advances in auto-encoders (Cheng et al., 2016; Socher et al., 2011; Zhang et al., 2017) and deep generative methods (Xu et al., 2017). Our work follows the line of neural variational inference for text processing (Miao et al., 2016), and resembles Miao and Blunsom (2016), which uses VAEs to model summaries as discrete latent variables for semi-supervised summarization, while we extend the VAE architecture for more complex, tree-structured latent variables. Semantic Parsing Most existing works alleviate issues of limited parallel data through weaklysupervised learning, using the denotations of MRs as indirect supervision (Reddy et al., 2014; Krishnamurthy et al., 2016; Neelakantan et al., 2016; Pasupat and Liang, 2015; Yin et al., 2016). For semi-supervised learning of semantic parsing, Kate and Mooney (2007) first explore using transductive SVMs to learn from a semantic parser’s predictions. Konstas et al. (2017) apply self-training to bootstrap an existing parser for AMR parsing. Kocisk´y et al. (2016) employ VAEs for semantic parsing, but in contrast to STRUCTVAE’s structured representation of MRs, they model NL utterances as flat latent variables, and learn from unlabeled MR data. There have also been efforts in unsupervised semantic parsing, which exploits external linguistic analysis of utterances (e.g., dependency trees) and the schema of target knowledge bases to infer the latent MRs (Poon and Domingos, 2009; Poon, 2013). Another line of research is domain adaptation, which seeks to transfer a semantic parser learned from a source domain to the target domain of interest, therefore alleviating the need of parallel data from the target domain (Su and Yan, 2017; Fan et al., 2017; Herzig and Berant, 2018). 
6 Conclusion We propose STRUCTVAE, a deep generative model with tree-structured latent variables for semi-supervised semantic parsing. We apply STRUCTVAE to semantic parsing and code generation tasks, and show it outperforms a strong supervised parser using extra unlabeled data. 763 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of LAW-ID@ACL. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of EMNLP. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal J´ozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of the SIGNLL. Bob Carpenter. 1998. Type-logical Semantics. Jianpeng Cheng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. 2017. Learning structured natural language representations for semantic parsing. In Proceedings of ACL. Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semisupervised learning for neural machine translation. In Proceedings of ACL. James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. In Proceedings of CoNLL. Dipanjan Das and Noah A. Smith. 2011. Semisupervised frame-semantic parsing for unknown predicates. In Proceedings of HLT. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of ACL. Xing Fan, Emilio Monti, Lambert Mathias, and Markus Dreyer. 2017. Transfer learning for neural semantic parsing. In Proceedings of the 2nd Workshop on Representation Learning for NLP. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of ACL. Kelvin Guu, Panupong Pasupat, Evan Zheran Liu, and Percy Liang. 2017. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. In Proceedings of ACL. Jonathan Herzig and Jonathan Berant. 2018. Decoupling structure and lexicon for zero-shot semantic parsing. arXiv preprint arXiv:1804.07918 . Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proceedings of ACL. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of ACL. Rohit J. Kate and Raymond J. Mooney. 2007. Semisupervised learning for semantic parsing using support vector machines. In Proceedings of NAACLHLT. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. 2014. Semi-supervised learning with deep generative models. In Proceedings of NIPS. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114 . Tom´as Kocisk´y, G´abor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised sequential autoencoders. In Proceedings of EMNLP. Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. 
Neural amr: Sequence-to-sequence models for parsing and generation. In Proceedings of ACL. Jayant Krishnamurthy, Oyvind Tafjord, and Aniruddha Kembhavi. 2016. Semantic parsing to probabilistic programs for situated question answering. In Proceedings of EMNLP. Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. 2017. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. In Proceedings of ACL. Percy Liang, Michael I. Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of ACL. Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tom´as Kocisk´y, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of ACL. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of EMNLP. Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. In Proceedings of EMNLP. Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In Proceedings of ICML. 764 Dipendra K. Misra and Yoav Artzi. 2016. Neural shiftreduce CCG semantic parsing. In Proceedings of EMNLP. Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. 2016. Neural programmer: Inducing latent programs with gradient descent. In Proceedings of ICLR. Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Learning to generate pseudo-code from source code using statistical machine translation (T). In Proceedings of ASE. Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of ACL. Hoifung Poon. 2013. Grounded unsupervised semantic parsing. In Proceedings of ACL. Hoifung Poon and Pedro Domingos. 2009. Unsupervised semantic parsing. In Proceedings of EMNLP. Python Software Foundation. 2016. Python abstract grammar. https://docs.python.org/2/library/ast.html. Chris Quirk, Raymond J. Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of ACL. Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code generation and semantic parsing. In Proceedings of ACL. Siva Reddy, Mirella Lapata, and Mark Steedman. 2014. Large-scale semantic parsing without questionanswer pairs. Transactions of ACL . Aarti Singh, Robert D. Nowak, and Xiaojin Zhu. 2008. Unlabeled data: Now it helps, now it doesn’t. In Proceedings of NIPS. Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of EMNLP. Mark Steedman. 2000. The Syntactic Process. Yu Su and Xifeng Yan. 2017. Cross-domain semantic parsing via paraphrasing. In Proceedings of EMNLP. Lappoon R. Tang and Raymond J. Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In Proceedings of ECML. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proceedings of NIPS. Adrienne Wang, Tom Kwiatkowski, and Luke Zettlemoyer. 2014. Morpho-syntactic lexical generalization for ccg semantic parsing. In Proceedings of EMNLP. Daniel C. Wang, Andrew W. Appel, Jeffrey L. Korn, and Christopher S. Serra. 1997. The zephyr abstract syntax description language. In Proceedings of DSL. 
Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Proceedings of ACL. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning . Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Proceedings of ACL. Weidi Xu, Haoze Sun, Chao Deng, and Ying Tan. 2017. Variational autoencoder for semi-supervised text classification. In Proceedings of AAAI. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of ACL. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of ACL. Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. 2016. Neural enquirer: Learning to query tables in natural language. In Proceedings of IJCAI. Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of ACL. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329 . John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of AAAI. Luke Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form structured classification with probabilistic categorial grammars. In Proceedings of UAI. Luke S. Zettlemoyer and Michael Collins. 2007. Online learning of relaxed ccg grammars for parsing to logical form. In Proceedings of EMNLP-CoNLL. Xiao Zhang, Yong Jiang, Hao Peng, Kewei Tu, and Dan Goldwasser. 2017. Semi-supervised structured prediction with neural crf autoencoder. In Proceedings of EMNLP. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103 . 765 Chunting Zhou and Graham Neubig. 2017. Multispace variational encoder-decoders for semisupervised labeled sequence transduction. In Proceedings of ACL. Xiaojin Zhu. 2005. Semi-supervised learning literature survey. Technical Report 1530, Computer Sciences, University of Wisconsin-Madison.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 766–777 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 766 Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing Bo Chen†‡, Le Sun†, Xianpei Han† †State Key Laboratory of Computer Science Institute of Software, Chinese Academy of Sciences, Beijing, China ‡University of Chinese Academy of Sciences, Beijing, China {chenbo,sunle,xianpei}@iscas.ac.cn Abstract This paper proposes a neural semantic parsing approach – Sequence-to-Action, which models semantic parsing as an endto-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose a RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets. 1 Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013). For example, the sentence “Which states border Texas?” will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))). A semantic parser needs two functions, one for structure prediction and the other for semantic grounding. Traditional semantic parsers are usually based on compositional grammar, such as CCG (Zettlemoyer and Collins, 2005, 2007), DCS (Liang et al., 2011), etc. These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit feaSequence-to-Action RNN Model Sentence Action Sequence Semantic Graph Generate Construct Constraints KB Which states border Texas? add_variable: A add_type: state arg_node: A add_entity: texas:st add_edge: next_to arg_node: A arg_node: texas:st return: A A next_to type state return texas:st Figure 1: Overview of our method, with a demonstration example. tures for candidate logical forms ranking. Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains. Moreover, it is often hard to design effective features, and its learning process is not end-to-end. To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods. Semantic graph-based methods (Reddy et al., 2014, 2016; Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1) and treat semantic parsing as a semantic graph matching/generation process. Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015), and share many commonalities with syntactic structures (Reddy et al., 2014). Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015). The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence. 
Currently, semantic graphs 767 are either constructed by matching with patterns (Bast and Haussmann, 2015), transforming from dependency tree (Reddy et al., 2014, 2016), or via a staged heuristic search algorithm (Yih et al., 2015). These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations. In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation (Cho et al., 2014). A lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016), where a sentence is parsed by translating it to linearized logical form using RNN models. There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features. These models are trained end-to-end, and can leverage attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) to learn soft alignments between sentences and logical forms. In this paper, we propose a new neural semantic parsing framework – Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models. Specifically, we model semantic parsing as an end-to-end semantic graph generation process. For example in Figure 1, our model will parse the sentence “Which states border Texas” by generating a sequence of actions [add variable:A, add type:state, ...]. To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.). And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence. Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding. Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data. Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations. Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms. We find that the action sequence encoding can better capture structure and semantic information, and is more compact. And the parsing can be enhanced by exploiting structure and semantic constraints. For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph. We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996), ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b). The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets. The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework – Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. 
This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models. • We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction. We further enhance the parsing by exploiting structure and semantic constraints during decoding. Experiments validate the effectiveness of our method. 2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x_1, ..., x_{|X|}, our sequence-to-action model generates a sequence of actions Y = y_1, ..., y_{|Y|} for constructing the correct semantic graph. Figure 2 shows an example. [Figure 2: An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.] The conditional probability P(Y|X) used in our model is decomposed as follows: P(Y|X) = \prod_{t=1}^{|Y|} P(y_t | y_{<t}, X) (1) where y_{<t} = y_1, ..., y_{t-1}. To achieve the above goal, we need: 1) an action set which can encode the semantic graph generation process; 2) an encoder which encodes the natural language input X into a vector representation, and a decoder which generates y_1, ..., y_{|Y|} conditioned on the encoding vector. In the following we describe them in detail. 2.1 Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not). To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of action denotes adding a variable node to the semantic graph. In most cases a variable node is a return node (e.g., which, what), but it can also be an intermediate variable node. We represent this kind of action as add_variable:A, where A is the identifier of the variable node. Add Entity Node: This kind of action denotes adding an entity node (e.g., Texas, New York) and is represented as add_entity_node:texas. An entity node corresponds to an entity in the knowledge base. Add Type Node: This kind of action denotes adding a type node (e.g., state, city). We represent it as add_type_node:state. Add Edge: This kind of action denotes adding an edge between two nodes. An edge is a binary relation in the knowledge base. This kind of action is represented as add_edge:next_to. Operation Action: This kind of action denotes adding an operation. An operation can be argmax, argmin, count, sum, not, etc. Because each operation has a scope, we define two actions for an operation: an operation start action, represented as start_operation:most, and an operation end action, represented as end_operation:most. The subgraph within the start and end operation actions is its scope. Argument Action: Some of the above actions need argument information. For example, which nodes should the add_edge:next_to action connect to? In this paper, we design argument actions for add_type, add_edge and operation actions, and the argument actions are placed directly after their main action. For add_type actions, we put an argument action to indicate which node this type node should constrain.
The argument can be a variable node or an entity node. An argument action for a type node is represented as arg:A. For the add_edge action, we use two argument actions, arg1_node and arg2_node, which are represented as arg1_node:A and arg2_node:B. We design argument actions for different operations. For operation:sum, there are three arguments: arg-for, arg-in and arg-return. For operation:count, they are arg-for and arg-return. There are two arg-for arguments for operation:most. We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and to couple tightly with the knowledge base. Furthermore, we find that the action sequence encoding is more compact than the linearized logical form (see Section 4.4 for more details). [Figure 3: Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.] 2.2 Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping a sentence to an action sequence. Specifically, similar to the RNN model in Jia and Liang (2016), this paper employs the attention-based sequence-to-sequence RNN model. Figure 3 presents the overall structure. Encoder: The encoder converts the input sequence x_1, ..., x_m to a sequence of context-sensitive vectors b_1, ..., b_m using a bidirectional RNN (Bahdanau et al., 2014). First, each word x_i is mapped to its embedding vector; then these vectors are fed into a forward RNN and a backward RNN. The sequence of hidden states h_1, ..., h_m is generated by recurrently applying the recurrence: h_i = LSTM(\phi^{(x)}(x_i), h_{i-1}). (2) The recurrence takes the form of an LSTM (Hochreiter and Schmidhuber, 1997). Finally, for each input position i, we define its context-sensitive embedding as b_i = [h^F_i, h^B_i]. Decoder: This paper uses the classical attention-based decoder (Bahdanau et al., 2014), which generates the action sequence y_1, ..., y_n one action at a time. At each time step j, it writes y_j based on the current hidden state s_j, then updates the hidden state to s_{j+1} based on s_j and y_j. The decoder is formally defined by the following equations: s_1 = tanh(W^{(s)}[h^F_m, h^B_1]) (3) e_{ji} = s_j^T W^{(a)} b_i (4) a_{ji} = exp(e_{ji}) / \sum_{i'=1}^{m} exp(e_{ji'}) (5) c_j = \sum_{i=1}^{m} a_{ji} b_i (6) P(y_j = w | x, y_{1:j-1}) \propto exp(U_w[s_j, c_j]) (7) s_{j+1} = LSTM([\phi^{(y)}(y_j), c_j], s_j) (8) where the normalized attention score a_{ji} defines the probability distribution over input words, indicating the attention probability on input word i at time j, and e_{ji} is the un-normalized attention score. To incorporate constraints during decoding, an extra controller component is added; its details are described in Section 3.3. Action Embedding. The above decoder needs the embedding of each action. As described above, each action has two parts, one for structure (e.g., add_edge) and the other for semantics (e.g., next_to). As a result, actions may share the same structure or semantic part, e.g., add_edge:next_to and add_edge:loc have the same structure part, and add_node:A and arg_node:A have the same semantic part. To make the parameters more compact, we first embed the structure part and the semantic part independently, and then concatenate them to get the final embedding. For instance, \phi^{(y)}(add_edge:next_to) = [\phi^{(y)}_{struct}(add_edge), \phi^{(y)}_{sem}(next_to)]. The action embeddings \phi^{(y)} are learned during training.
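To make the decoding procedure above concrete, the following NumPy sketch renders one attention-based decoder step (Eqs. 4-8) and the factored action embedding. It is an illustrative re-implementation, not the authors' Theano code; `lstm_cell` and the embedding lookup tables are hypothetical placeholders.

```python
import numpy as np

def softmax(v):
    v = v - v.max()
    e = np.exp(v)
    return e / e.sum()

def action_embedding(struct_emb, sem_emb, action):
    """Factored action embedding: concatenate the structure-part and the
    semantic-part vectors, e.g. phi(add_edge:next_to) =
    [phi_struct(add_edge); phi_sem(next_to)]. The lookup dicts are assumed."""
    struct, sem = action.split(":")
    return np.concatenate([struct_emb[struct], sem_emb[sem]])

def decoder_step(s_j, B, W_a, U, lstm_cell, prev_action_vec):
    """One attention-based decoding step, following Eqs. (4)-(8).

    s_j: current decoder state (d_s,); B: encoder vectors b_1..b_m as an
    (m, d_b) matrix; W_a: attention matrix (d_s, d_b); U: output projection
    (|actions|, d_s + d_b); lstm_cell: any callable implementing the LSTM
    recurrence; prev_action_vec: embedding of the previous action y_{j-1}."""
    e_j = B @ (W_a.T @ s_j)                 # e_{ji} = s_j^T W^(a) b_i   (Eq. 4)
    a_j = softmax(e_j)                      # normalized attention       (Eq. 5)
    c_j = a_j @ B                           # context vector c_j         (Eq. 6)
    scores = U @ np.concatenate([s_j, c_j])
    p_actions = softmax(scores)             # P(y_j = w | x, y_{1:j-1})  (Eq. 7)
    s_next = lstm_cell(np.concatenate([prev_action_vec, c_j]), s_j)  #   (Eq. 8)
    return p_actions, s_next
```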
3 Constrained Semantic Parsing using Sequence-to-Action Model In this section, we describe how to build a neural semantic parser using the sequence-to-action model. We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding. 3.1 Training Parameter Estimation. The parameters of our model include the RNN parameters W^{(s)}, W^{(a)}, U_w, the word embeddings \phi^{(x)}, and the action embeddings \phi^{(y)}. We estimate these parameters from training data. Given a training example with a sentence X and its action sequence Y, we maximize the likelihood of the generated sequence of actions given X. The objective function is: \sum_{i=1}^{n} \log P(Y_i|X_i) (9) A standard stochastic gradient descent algorithm is employed to update the parameters. Logical Form to Action Sequence. Currently, most datasets for semantic parsing are labeled with logical forms. In order to train our model, we
The structure constraints ensure action sequence will form a connected acyclic graph. For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint). This kind of constraints are domain-independent. The controller encodes structure constraints as a set of rules. Semantic Constraints. The semantic constraints ensure the constructed graph must follow the schema of knowledge bases. Specifically, we model two types of semantic constraints. One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas. For example, in GEO dataset, relation next to’s arg1 and arg2 should both be a state. The second is type conflict constraints, i.e., an entity/variable node’s type must be consistent, i.e., a node cannot be both of type city and state. Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas. The controller encodes semantic constraints as a set of rules. 4 Experiments In this section, we assess the performance of our method and compare it with previous methods. 771 4.1 Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT. GEO contains natural language questions about US geography paired with corresponding Prolog database queries. Following Zettlemoyer and Collins (2005), we use the standard 600/280 instance splits for training/test. ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query. Following Zettlemoyer and Collins (2007), we use the standard 4473/448 instance splits for training/test. OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains. We evaluate on the standard train/test splits as Wang et al. (2015b). 4.2 Experimental Settings Following the experimental setup of Jia and Liang (2016): we use 200 hidden units and 100dimensional word vectors for sentence encoding. The dimensions of action embedding are tuned on validation datasets for each corpus. We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1]. We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15. We replace word vectors for words occurring only once with an universal word vector. The beam size is set as 5. Our model is implemented in Theano (Bergstra et al., 2010), and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act. We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016). 4.3 Overall Results We compare our method with state-of-the-art systems on all three datasets. Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison. For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints – Seq2Act; the second one adds structure constraints in decoding – Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) 79.3 – Zettlemoyer and Collins (2007) 86.1 84.6 Kwiatkowksi et al. (2010) 88.9 – Kwiatkowski et al. (2011) 88.6 82.8 Liang et al. (2011)* (+lexicon) 91.1 – Poon (2013) – 83.5 Zhao et al. 
(2015) 88.9 84.2 Rabinovich et al. (2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016)* (+data) 89.3 83.3 Dong and Lapata (2016): 2Seq 84.6 84.2 Dong and Lapata (2016): 2Tree 87.1 84.6 Our Models Seq2Act 87.5 84.6 Seq2Act (+C1) 88.2 85.0 Seq2Act (+C1+C2) 88.9 85.5 Table 1: Test accuracies on GEO and ATIS datasets, where * indicates systems with extraresources are used. constraints – Seq2Act (+C1+C2). Semantic constraints (C2) are stricter than structure constraints (C1). Therefore we set that C1 should be first met for C2 to be met. So in our experiments we add constraints incrementally. The overall results are shown in Table 1-2. From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset. In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al. (2011)* which uses extra handcrafted lexicons and Jia and Liang (2016)* which uses extra augmented training data. On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al. (2017) which uses a supervised attention strategy. On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016)* with extra augmented training data. 2) Compared with the linearized logical form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing. On all three datasets, 772 Soc. Blo. Bas. Res. Cal. Hou. Pub. Rec. Avg. Previous Work Wang et al. (2015b) 48.2 41.9 46.3 75.9 74.4 54.0 59.0 70.8 58.8 Seq2Seq Models Xiao et al. (2016) 80.0 55.6 80.5 80.1 75.0 61.9 75.8 – 72.7 Jia and Liang (2016) 81.4 58.1 85.2 76.2 78.0 71.4 76.4 79.6 75.8 Jia and Liang (2016)* (+data) 79.6 60.2 87.5 79.5 81.0 72.5 78.3 81.0 77.5 Our Models Seq2Act 81.4 60.4 87.5 79.8 81.0 73.0 79.5 81.5 78.0 Seq2Act (+C1) 81.8 60.9 88.0 80.1 81.0 73.5 80.1 82.0 78.4 Seq2Act (+C1+C2) 82.1 61.4 88.2 80.7 81.5 74.1 80.7 82.9 79.0 Table 2: Test accuracies on OVERNIGHT dataset, which includes eight domains: Social, Blocks, Basketball, Restaurants, Calendar, Housing, Publications, and Recipes. our basic Seq2Act model gets better results than all Seq2Seq baselines. On GEO, the Seq2Act model achieve test accuracy of 87.5, better than the best accuracy 87.1 of Seq2Seq baseline. On ATIS, the Seq2Act model obtains a test accuracy of 84.6, the same as the best Seq2Seq baseline. On OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5. We argue that this is because our action sequence encoding is more compact and can capture more information. 3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence. In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model. This is because a part of illegal actions will be filtered during decoding. 4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing. Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets. This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types. 
4.4 Detailed Analysis Effect of Entity Handling Mechanisms. This paper implements two entity handling mechanisms – Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016). To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3. We can see that, Replacing mechanism outperforms Copying in all three datasets. This is because Replacing is done Replacing Copying GEO 88.9 88.2 ATIS 85.5 84.0 OVERNIGHT 79.0 77.9 Table 3: Test accuracies of Seq2Act (+C1+C2) on GEO, ATIS, and OVERNIGHT of two entity handling mechanisms. Logical Form Action Sequence GEO 28.2 18.2 ATIS 28.4 25.8 OVERNIGHT 46.6 33.3 Table 4: Average length of logical forms and action sequences on three datasets. On OVERNIGHT, we average across all eight domains. in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism. Linearized Logical Form vs. Action Sequence. Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets. As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively. The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem. 4.5 Error Analysis We perform error analysis on results and find there are mainly two types of errors. Unseen/Informal Sentence Structure. Some test sentences have unseen syntactic structures. For example, the first case in Table 5 has an unseen 773 Error Types Examples Un-covered Sentence Structure Sentence: Iowa borders how many states? (Formal Form: How many states does Iowa border?) Gold Parse: answer(A, count(B, (const (C, stateid(iowa)), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) UnderMapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5: Some examples for error analysis. Each example includes the sentence for parsing, with gold parse and predicted parse from our model. and informal structure, where entity word “Iowa” and relation word “borders” appear ahead of the question words “how many”. For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones. Under-Mapping. As Dong and Lapata (2016) discussed, the attention model does not take the alignment history into consideration, makes some words are ignored during parsing. For example in the second case in Table 5, “first class” is ignored during the decoding process. 
This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) 5 Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Artzi and Zettlemoyer, 2013; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; Reddy et al., 2017). Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars. The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015), CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013), DCS (Liang et al., 2011; Berant et al., 2013), etc. As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features. In recent years, one promising direction of semantic parsing is to use semantic graph as representation. Thus semantic parsing is modeled as a semantic graph generation process. Ge and Mooney (2009) build semantic graph by transforming syntactic tree. Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns. Reddy et al. (2014, 2016) use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree. Yih et al. (2015) generate semantic graphs using a staged heuristic search algorithm. These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014, 2016), structure mismatch (Chen et al., 2016), and are hard to deal with complex sentences (Yih et al., 2015). One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem. Dong and Lapata (2016), Jia and Liang (2016) and Xiao et al. (2016) transform word sequence to linearized logical forms. One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms. Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms. It has been shown that structure and semantic constraints are effective for enhancing semantic parsing. Krishnamurthy et al. (2017) use type constraints to filter illegal tokens. Liang et al. (2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens. Iyyer et al. (2017) adopt type constraints to generate valid actions. Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model. Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a). In semantic parsing, our method has a tight-coupling with knowledge bases, and con774 straints can be exploited for more accurate decoding. We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing. 6 Conclusions This paper proposes Sequence-to-Action, a method which models semantic parsing as an end-to-end semantic graph generation process. 
By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets. Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing. For future work, to solve the problem of the lack of training data, we want to design weakly supervised learning algorithm using denotations (QA pairs) as supervision. Furthermore, we want to collect labeled data by designing an interactive UI for annotation assist like (Yih et al., 2016), which uses semantic graphs to annotate the meaning of sentences, since semantic graph is more natural and can be easily annotated without the need of expert knowledge. Acknowledgments This research work is supported by the National Key Research and Development Program of China under Grant No.2017YFB1002104; and the National Natural Science Foundation of China under Grants no. 61572477 and 61772505. Moreover, we sincerely thank the reviewers for their valuable comments. References Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage ccg semantic parsing with amr. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1699–1710. http://aclweb.org/anthology/D15-1198. Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics 1(1):49–62. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. http://arxiv.org/abs/1409.0473. Hannah Bast and Elmar Haussmann. 2015. More accurate question answering on freebase. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM 2015, Melbourne, VIC, Australia, October 19 - 23, 2015. pages 1431–1440. https://doi.org/10.1145/2806416.2806472. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 1533– 1544. http://www.aclweb.org/anthology/D13-1160. Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 1415–1425. http://www.aclweb.org/anthology/P14-1133. James Bergstra, Olivier Breuleux, Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. Theano: A cpu and gpu math compiler in python. In Proc. 9th Python in Science Conf. pages 1–7. Qingqing Cai and Alexander Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 423–433. http://www.aclweb.org/anthology/P13-1042. Bo Chen, Le Sun, Xianpei Han, and Bo An. 2016. Sentence rewriting for semantic parsing. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 766–777. http://www.aclweb.org/anthology/P16-1073. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1724–1734. http://www.aclweb.org/anthology/D141179. James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, Uppsala, Sweden, pages 18–27. http://www.aclweb.org/anthology/W10-2903. 775 Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment biases into an attentional neural translation model. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 876–885. http://www.aclweb.org/anthology/N16-1102. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 33–43. http://www.aclweb.org/anthology/P16-1004. Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 875–886. https://www.aclweb.org/anthology/D17-1091. Ruifang Ge and Raymond Mooney. 2009. Learning a compositional semantic parser using an existing syntactic parser. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Association for Computational Linguistics, Suntec, Singapore, pages 611–619. http://www.aclweb.org/anthology/P/P09/P09-1069. Yulan He and Steve Young. 2005. Semantic processing using the hidden vector state model. Computer Speech Language 19(1):85 – 106. https://doi.org/https://doi.org/10.1016/j.csl.2004.03.001. James Henderson, Paola Merlo, Ivan Titov, and Gabriele Musillo. 2013. Multilingual joint parsing of syntactic and semantic dependencies with a latent variable model. Comput. Linguist. 39(4):949–998. http://dx.doi.org/10.1162/COLIa00158. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput. 9(8):1735– 1780. https://doi.org/10.1162/neco.1997.9.8.1735. Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. 2017. Search-based neural structured learning for sequential question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 1821–1831. http://aclweb.org/anthology/P17-1167. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 12–22. http://www.aclweb.org/anthology/P16-1002. Rohit J. Kate and Raymond J. Mooney. 2006. Using string-kernels for learning semantic parsers. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Sydney, Australia, pages 913–920. https://doi.org/10.3115/1220175.1220290. Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 1516–1526. https://www.aclweb.org/anthology/D17-1160. Jayant Krishnamurthy and Tom Mitchell. 2012. Weakly supervised training of semantic parsers. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, Jeju Island, Korea, pages 754–765. http://www.aclweb.org/anthology/D12-1069. Tom Kwiatkowksi, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Cambridge, MA, pages 1223– 1233. http://www.aclweb.org/anthology/D10-1119. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 1545–1556. http://www.aclweb.org/anthology/D131161. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in ccg grammar induction for semantic parsing. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Edinburgh, Scotland, UK., pages 1512–1523. http://www.aclweb.org/anthology/D11-1140. Junhui Li, Muhua Zhu, Wei Lu, and Guodong Zhou. 2015. Improving semantic parsing with enriched synchronous context-free grammar. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1455–1465. http://aclweb.org/anthology/D15-1170. 776 Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. 2017. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 23–33. http://aclweb.org/anthology/P171003. Percy Liang, Michael Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Portland, Oregon, USA, pages 590–599. http://www.aclweb.org/anthology/P11-1060. Wei Lu, Hwee Tou Ng, Wee Sun Lee, and Luke S. Zettlemoyer. 2008. 
A generative model for parsing natural language to meaning representations. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Honolulu, Hawaii, pages 783–792. http://www.aclweb.org/anthology/D08-1082. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1412– 1421. http://aclweb.org/anthology/D15-1166. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Comput. Linguist. 34(4):513–553. http://dx.doi.org/10.1162/coli.07056-R1-07-027. Hoifung Poon. 2013. Grounded unsupervised semantic parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 933–943. http://www.aclweb.org/anthology/P13-1092. Chris Quirk, Raymond Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 878–888. http://www.aclweb.org/anthology/P15-1085. Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code generation and semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 1139–1149. http://aclweb.org/anthology/P17-1105. Siva Reddy, Mirella Lapata, and Mark Steedman. 2014. Large-scale semantic parsing without question-answer pairs. Transactions of the Association for Computational Linguistics 2:377–392. http://aclweb.org/anthology/Q14-1030. Siva Reddy, Oscar T¨ackstr¨om, Michael Collins, Tom Kwiatkowski, Dipanjan Das, Mark Steedman, and Mirella Lapata. 2016. Transforming Dependency Structures to Logical Forms for Semantic Parsing. Transactions of the Association for Computational Linguistics 4:127–140. http://sivareddy.in/papers/reddy2016transforming.pdf. Siva Reddy, Oscar T¨ackstr¨om, Slav Petrov, Mark Steedman, and Mirella Lapata. 2017. Universal semantic parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 89–101. https://www.aclweb.org/anthology/D17-1009. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 76–85. http://www.aclweb.org/anthology/P16-1008. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015a. A transition-based algorithm for amr parsing. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 366– 375. http://www.aclweb.org/anthology/N15-1040. Yushi Wang, Jonathan Berant, and Percy Liang. 2015b. Building a semantic parser overnight. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 1332–1342. http://www.aclweb.org/anthology/P151129. Yuk Wah Wong and Raymond Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Association for Computational Linguistics, Prague, Czech Republic, pages 960– 967. http://www.aclweb.org/anthology/P07-1121. Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1341–1350. http://www.aclweb.org/anthology/P161127. 777 Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 1321– 1331. http://www.aclweb.org/anthology/P15-1128. Wen-tau Yih, Matthew Richardson, Chris Meek, MingWei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, pages 201–206. http://anthology.aclweb.org/P16-2033. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In AAAI/IAAI. AAAI Press/MIT Press, Portland, OR, pages 1050–1055. http://www.cs.utexas.edu/users/ai-lab/?zelle:aaai96. Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL). Association for Computational Linguistics, Prague, Czech Republic, pages 678–687. http://www.aclweb.org/anthology/D/D07/D071071. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In UAI ’05, Proceedings of the 21st Conference in Uncertainty in Artificial Intelligence, Edinburgh, Scotland, July 26-29, 2005. pages 658–666. Kai Zhao, Hany Hassan, and Michael Auli. 2015. Learning translation models from monolingual continuous representations. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 1527– 1536. http://www.aclweb.org/anthology/N15-1176.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 778–788 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 778 On the Limitations of Unsupervised Bilingual Dictionary Induction Anders Søgaard♥ Sebastian Ruder♠♣ Ivan Vuli´c3 ♥University of Copenhagen, Copenhagen, Denmark ♠Insight Research Centre, National University of Ireland, Galway, Ireland ♣Aylien Ltd., Dublin, Ireland 3Language Technology Lab, University of Cambridge, UK [email protected],[email protected],[email protected] Abstract Unsupervised machine translation—i.e., not assuming any cross-lingual supervision signal, whether a dictionary, translations, or comparable corpora—seems impossible, but nevertheless, Lample et al. (2018a) recently proposed a fully unsupervised machine translation (MT) model. The model relies heavily on an adversarial, unsupervised alignment of word embedding spaces for bilingual dictionary induction (Conneau et al., 2018), which we examine here. Our results identify the limitations of current unsupervised MT: unsupervised bilingual dictionary induction performs much worse on morphologically rich languages that are not dependent marking, when monolingual corpora from different domains or different embedding algorithms are used. We show that a simple trick, exploiting a weak supervision signal from identical words, enables more robust induction, and establish a near-perfect correlation between unsupervised bilingual dictionary induction performance and a previously unexplored graph similarity metric. 1 Introduction Cross-lingual word representations enable us to reason about word meaning in multilingual contexts and facilitate cross-lingual transfer (Ruder et al., 2018). Early cross-lingual word embedding models relied on large amounts of parallel data (Klementiev et al., 2012; Mikolov et al., 2013a), but more recent approaches have tried to minimize the amount of supervision necessary (Vuli´c and Korhonen, 2016; Levy et al., 2017; Artetxe et al., 2017). Some researchers have even presented unsupervised methods that do not rely on any form of cross-lingual supervision at all (Barone, 2016; Conneau et al., 2018; Zhang et al., 2017). Unsupervised cross-lingual word embeddings hold promise to induce bilingual lexicons and machine translation models in the absence of dictionaries and translations (Barone, 2016; Zhang et al., 2017; Lample et al., 2018a), and would therefore be a major step toward machine translation to, from, or even between low-resource languages. Unsupervised approaches to learning crosslingual word embeddings are based on the assumption that monolingual word embedding graphs are approximately isomorphic, that is, after removing a small set of vertices (words) (Mikolov et al., 2013b; Barone, 2016; Zhang et al., 2017; Conneau et al., 2018). In the words of Barone (2016): ...we hypothesize that, if languages are used to convey thematically similar information in similar contexts, these random processes should be approximately isomorphic between languages, and that this isomorphism can be learned from the statistics of the realizations of these processes, the monolingual corpora, in principle without any form of explicit alignment. Our results indicate this assumption is not true in general, and that approaches based on this assumption have important limitations. Contributions We focus on the recent stateof-the-art unsupervised model of Conneau et al. 
(2018).1 Our contributions are: (a) In §2, we show that the monolingual word embeddings used in Conneau et al. (2018) are not approximately isomorphic, using the VF2 algorithm (Cordella et al., 2001) and we therefore introduce a metric for quantifying the similarity of word embeddings, based on Laplacian eigenvalues. (b) In §3, we identify circumstances under which the unsupervised bilingual 1Our motivation for this is that Artetxe et al. (2017) use small dictionary seeds for supervision, and Barone (2016) seems to obtain worse performance than Conneau et al. (2018). Our results should extend to Barone (2016) and Zhang et al. (2017), which rely on very similar methodology. 779 (a) Top 10 most frequent English words (b) German translations of top 10 most frequent English words (c) Top 10 most frequent English nouns (d) German translations of top 10 most frequent English nouns Figure 1: Nearest neighbor graphs. dictionary induction (BDI) algorithm proposed in Conneau et al. (2018) does not lead to good performance. (c) We show that a simple trick, exploiting a weak supervision signal from words that are identical across languages, makes the algorithm much more robust. Our main finding is that the performance of unsupervised BDI depends heavily on all three factors: the language pair, the comparability of the monolingual corpora, and the parameters of the word embedding algorithms. 2 How similar are embeddings across languages? As mentioned, recent work focused on unsupervised BDI assumes that monolingual word embedding spaces (or at least the subgraphs formed by the most frequent words) are approximately isomorphic. In this section, we show, by investigating the nearest neighbor graphs of word embedding spaces, that word embeddings are far from isomorphic. We therefore introduce a method for computing the similarity of non-isomorphic graphs. In §4.7, we correlate our similarity metric with performance on unsupervised BDI. Isomorphism To motivate our study, we first establish that word embeddings are far from graph isomorphic2—even for two closely re2Two graphs that contain the same number of graph vertices connected in the same way are said to be isomorphic. In the context of weighted graphs such as word embeddings, a lated languages, English and German, and using embeddings induced from comparable corpora (Wikipedia) with the same hyper-parameters. If we take the top k most frequent words in English, and the top k most frequent words in German, and build nearest neighbor graphs for English and German using the monolingual word embeddings used in Conneau et al. (2018), the graphs are of course very different. This is, among other things, due to German case and the fact that the translates into der, die, and das, but unsupervised alignment does not have access to this kind of information. Even if we consider the top k most frequent English words and their translations into German, the nearest neighbor graphs are not isomorphic. Figure 1a-b shows the nearest neighbor graphs of the top 10 most frequent English words on Wikipedia, and their German translations. Word embeddings are particularly good at capturing relations between nouns, but even if we consider the top k most frequent English nouns and their translations, the graphs are not isomorphic; see Figure 1c-d. We take this as evidence that word embeddings are not approximately isomorphic across languages. 
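To make the isomorphism test above concrete, the following sketch builds k-nearest-neighbour graphs for two word lists and checks them with networkx's VF2-based isomorphism test. The variable names (emb_en, emb_de, words_en, words_de) and the choice of k = 3 neighbours are illustrative assumptions, not the exact setup behind Figure 1.

# Sketch: build nearest-neighbour graphs for two word lists and test isomorphism (VF2).
# Assumes emb_en, emb_de map words to L2-normalised numpy vectors, and that
# words_en / words_de are matched lists of translation pairs.
import numpy as np
import networkx as nx

def nn_graph(words, emb, k=3):
    """Unlabelled k-nearest-neighbour graph over the given words (cosine similarity)."""
    vecs = np.stack([emb[w] for w in words])        # (n, d), rows assumed unit length
    sims = vecs @ vecs.T                            # cosine similarities
    np.fill_diagonal(sims, -np.inf)                 # exclude self-similarity
    g = nx.Graph()
    g.add_nodes_from(range(len(words)))
    for i in range(len(words)):
        for j in np.argsort(-sims[i])[:k]:          # k nearest neighbours of word i
            g.add_edge(i, int(j))
    return g

g_en = nn_graph(words_en, emb_en)
g_de = nn_graph(words_de, emb_de)
print(nx.is_isomorphic(g_en, g_de))                 # VF2 isomorphism test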
We also ran graph isomorphism checks on 10 random samples of frequent English nouns and their translations into Spanish, and only in 1/10 of the samples were the corresponding nearest neighbor graphs isomorphic. Eigenvector similarity Since the nearest neighbor graphs are not isomorphic, even for frequent translation pairs in neighboring languages, we want to quantify the potential for unsupervised BDI using a metric that captures varying degrees of graph similarity. Eigenvalues are compact representations of global properties of graphs, and we introduce a spectral metric based on Laplacian eigenvalues (Shigehalli and Shettar, 2011) that quantifies the extent to which the nearest neighbor graphs are isospectral. Note that (approximately) isospectral graphs need not be (approximately) isomorphic, but (approximately) isomorphic graphs are always (approximately) isospectral (Gordon et al., 1992). Let A1 and A2 be the adjacency matrices of the nearest neighbor graphs G1 and G2 of our two word embeddings, respectively. Let L1 = D1 −A1 and L2 = D2 −A2 be the Laplacians of the nearest neighbor graphs, where D1 and D2 are the corresponding diagonal matrices of degrees. We now weak version of this is to require that the underlying nearest neighbor graphs for the most frequent k words are isomorphic. 780 compute the eigensimilarity of the Laplacians of the nearest neighbor graphs, L1 and L2. For each graph, we find the smallest k such that the sum of the k largest Laplacian eigenvalues is <90% of the Laplacian eigenvalues. We take the smallest k of the two, and use the sum of the squared differences between the largest k Laplacian eigenvalues ∆as our similarity metric. ∆= k X i=1 (λ1i −λ2i)2 where k is chosen s.t. min j { Pk i=1 λji Pn i=1 λji > 0.9} Note that ∆= 0 means the graphs are isospectral, and the metric goes to infinite. Thus, the higher ∆is, the less similar the graphs (i.e., their Laplacian spectra). We discuss the correlation between unsupervised BDI performance and approximate isospectrality or eigenvector similarity in §4.7. 3 Unsupervised cross-lingual learning 3.1 Learning scenarios Unsupervised neural machine translation relies on BDI using cross-lingual embeddings (Lample et al., 2018a; Artetxe et al., 2018), which in turn relies on the assumption that word embedding graphs are approximately isomorphic. The work of Conneau et al. (2018), which we focus on here, also makes several implicit assumptions that may or may not be necessary to achieve such isomorphism, and which may or may not scale to low-resource languages. The algorithms are not intended to be limited to learning scenarios where these assumptions hold, but since they do in the reported experiments, it is important to see to what extent these assumptions are necessary for the algorithms to produce useful embeddings or dictionaries. We focus on the work of Conneau et al. (2018), who present a fully unsupervised approach to aligning monolingual word embeddings, induced using fastText (Bojanowski et al., 2017). We describe the learning algorithm in §3.2. Conneau et al. (2018) consider a specific set of learning scenarios: (a) The authors work with the following languages: English-{French, German, Chinese, Russian, Spanish}. These languages, except French, are dependent marking (Table 1).3 We evaluate Conneau et al. (2018) on (English to) Estonian (ET), Finnish (FI), Greek (EL), Hungarian (HU), Polish (PL), and Turkish (TR) in §4.2, to test whether the selection of languages in the original study introduces a bias. 
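Before turning to the remaining learning scenarios, the eigenvector similarity metric defined in Section 2 can be sketched in a few lines. The snippet below assumes the two nearest neighbor graphs are available as networkx objects (e.g. from the sketch above) and is an illustration of the definition, not the authors' implementation.

# Sketch of the Laplacian eigenvalue similarity Delta from Section 2.
# g1, g2 are assumed to be networkx graphs.
import numpy as np
import networkx as nx

def eigen_similarity(g1, g2):
    def spectrum(g):
        L = nx.laplacian_matrix(g).toarray().astype(float)
        return np.linalg.eigvalsh(L)[::-1]          # Laplacian eigenvalues, largest first
    def select_k(vals):
        # smallest k such that the top-k eigenvalues hold >90% of the total sum
        running = np.cumsum(vals)
        return int(np.argmax(running / vals.sum() > 0.9)) + 1
    s1, s2 = spectrum(g1), spectrum(g2)
    k = min(select_k(s1), select_k(s2))
    return float(np.sum((s1[:k] - s2[:k]) ** 2))    # Delta: 0 for isospectral graphs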
(b) The monolingual corpora in their experiments are comparable; Wikipedia corpora are used, except for an experiment in which they include Google Gigawords. We evaluate across different domains, i.e., on all combinations of Wikipedia, EuroParl, and the EMEA medical corpus, in §4.3. We believe such scenarios are more realistic for low-resource languages. (c) The monolingual embedding models are induced using the same algorithms with the same hyper-parameters. We evaluate Conneau et al. (2018) on pairs of embeddings induced with different hyper-parameters in §4.4. While keeping hyper-parameters fixed is always possible, it is of practical interest to know whether the unsupervised methods work on any set of pre-trained word embeddings. We also investigate the sensitivity of unsupervised BDI to the dimensionality of the monolingual word embeddings in §4.5. The motivation for this is that dimensionality reduction will alter the geometric shape and remove characteristics of the embedding graphs that are important for alignment; but on the other hand, lower dimensionality introduces regularization, which will make the graphs more similar. Finally, in §4.6, we investigate the impact of different types of query test words on performance, including how performance varies across part-of-speech word classes and on shared vocabulary items. 3.2 Summary of Conneau et al. (2018) We now introduce the method of Conneau et al. (2018).4 The approach builds on existing work on learning a mapping between monolingual word embeddings (Mikolov et al., 2013b; Xing et al., 2015) and consists of the following steps: 1) Monolingual word embeddings: An off-the-shelf word embedding algorithm (Bojanowski et al., 2017) is used to learn source and target language spaces X 3A dependent-marking language marks agreement and case more commonly on dependents than on heads. 4https://github.com/facebookresearch/ MUSE 781 and Y . 2) Adversarial mapping: A translation matrix W is learned between the spaces X and Y using adversarial techniques (Ganin et al., 2016). A discriminator is trained to discriminate samples from the translated source space WX from the target space Y , while W is trained to prevent this. This, again, is motivated by the assumption that source and target language word embeddings are approximately isomorphic. 3) Refinement (Procrustes analysis): W is used to build a small bilingual dictionary of frequent words, which is pruned such that only bidirectional translations are kept (Vuli´c and Korhonen, 2016). A new translation matrix W that translates between the spaces X and Y of these frequent word pairs is then induced by solving the Orthogonal Procrustes problem: W ∗= argminW ∥WX −Y ∥F = UV ⊤ s.t. UΣV ⊤= SVD(Y X⊤) (1) This step can be used iteratively by using the new matrix W to create new seed translation pairs. It requires frequent words to serve as reliable anchors for learning a translation matrix. In the experiments in Conneau et al. (2018), as well as in ours, the iterative Procrustes refinement improves performance across the board. 4) Cross-domain similarity local scaling (CSLS) is used to expand high-density areas and condense low-density ones, for more accurate nearest neighbor calculation, CSLS reduces the hubness problem in high-dimensional spaces (Radovanovi´c et al., 2010; Dinu et al., 2015). It relies on the mean similarity of a source language embedding x to its K target language nearest neighbours (K = 10 suggested) nn1, . . . 
, nnK: mnnT (x) = 1 K K X i=1 cos(x, nni) (2) where cos is the cosine similarity. mnnS(y) is defined in an analogous manner for any target language embedding y. CSLS(x, y) is then calculated as follows: 2cos(x, y) −mnnT (x) −mnnS(y) (3) 3.3 A simple supervised method Instead of learning cross-lingual embeddings completely without supervision, we can extract inexpensive supervision signals by harvesting identically spelled words as in, e.g. (Artetxe et al., 2017; Smith et al., 2017). Specifically, we use identically spelled words that occur in the vocabularies of both languages as bilingual seeds, without employing any additional transliteration or lemmatization/normalization methods. Using this seed dictionary, we then run the refinement step using Procrustes analysis of Conneau et al. (2018). 4 Experiments In the following experiments, we investigate the robustness of unsupervised cross-lingual word embedding learning, varying the language pairs, monolingual corpora, hyper-parameters, etc., to obtain a better understanding of when and why unsupervised BDI works. Task: Bilingual dictionary induction After the shared cross-lingual space is induced, given a list of N source language words xu,1, . . . , xu,N, the task is to find a target language word t for each query word xu relying on the representations in the space. ti is the target language word closest to the source language word xu,i in the induced cross-lingual space, also known as the cross-lingual nearest neighbor. The set of learned N (xu,i, ti) pairs is then run against a gold standard dictionary. We use bilingual dictionaries compiled by Conneau et al. (2018) as gold standard, and adopt their evaluation procedure: each test set in each language consists of 1500 gold translation pairs. We rely on CSLS for retrieving the nearest neighbors, as it consistently outperformed the cosine similarity in all our experiments. Following a standard evaluation practice (Vuli´c and Moens, 2013; Mikolov et al., 2013b; Conneau et al., 2018), we report Precision at 1 scores (P@1): how many times one of the correct translations of a source word w is retrieved as the nearest neighbor of w in the target language. 4.1 Experimental setup Our default experimental setup closely follows the setup of Conneau et al. (2018). For each language we induce monolingual word embeddings for all languages from their respective tokenized and lowercased Polyglot Wikipedias (Al-Rfou et al., 2013) using fastText (Bojanowski et al., 2017). Only words with more than 5 occurrences are retained for training. Our fastText setup relies on skip-gram with negative sampling (Mikolov et al., 2013a) with standard hyper-parameters: bag-of-words contexts with the window size 2, 15 negative samples, subsampling rate 10−4, and character n-gram length 782 Marking Type # Cases English (EN) dependent isolating None French (FR) mixed fusional None German (DE) dependent fusional 4 Chinese (ZH) dependent isolating None Russian (RU) dependent fusional 6–7 Spanish (ES) dependent fusional None Estonian (ET) mixed agglutinative 10+ Finnish (FI) mixed agglutinative 10+ Greek (EL) double fusional 3 Hungarian (HU) dependent agglutinative 10+ Polish (PL) dependent fusional 6–7 Turkish (TR) dependent agglutinative 6–7 Table 1: Languages in Conneau et al. 
(2018) and in our experiments (lower half) Unsupervised Supervised Similarity (Adversarial) (Identical) (Eigenvectors) EN-ES 81.89 82.62 2.07 EN-ET 00.00 31.45 6.61 EN-FI 00.09 28.01 7.33 EN-EL 00.07 42.96 5.01 EN-HU 45.06 46.56 3.27 EN-PL 46.83 52.63 2.56 EN-TR 32.71 39.22 3.14 ET-FI 29.62 24.35 3.98 Table 2: Bilingual dictionary induction scores (P@1×100%) using a) the unsupervised method with adversarial training; b) the supervised method with a bilingual seed dictionary consisting of identical words (shared between the two languages). The third columns lists eigenvector similarities between 10 randomly sampled source language nearest neighbor subgraphs of 10 nodes and the subgraphs of their translations, all from the benchmark dictionaries in Conneau et al. (2018). 3-6. All embeddings are 300-dimensional. As we analyze the impact of various modeling assumptions in the following sections (e.g., domain differences, algorithm choices, hyper-parameters), we also train monolingual word embeddings using other corpora and different hyper-parameter choices. Quick summaries of each experimental setup are provided in the respective subsections. 4.2 Impact of language similarity Conneau et al. (2018) present results for several target languages: Spanish, French, German, Russian, Chinese, and Esperanto. All languages but Esperanto are isolating or exclusively concatenating languages from a morphological point of view. All languages but French are dependent-marking. Table 1 lists three important morphological properties of the languages involved in their/our experiments. Agglutinative languages with mixed or double marking show more morphological variance with content words, and we speculate whether unsupervised BDI is challenged by this kind of morphological complexity. To evaluate this, we experiment with Estonian and Finnish, and we include Greek, Hungarian, Polish, and Turkish to see how their approach fares on combinations of these two morphological traits. We show results in the left column of Table 2. The results are quite dramatic. The approach achieves impressive performance for Spanish, one of the languages Conneau et al. (2018) include in their paper. For the languages we add here, performance is less impressive. For the languages with dependent marking (Hungarian, Polish, and Turkish), P@1 scores are still reasonable, with Turkish being slightly lower (0.327) than the others. However, for Estonian and Finnish, the method fails completely. Only in less than 1/1000 cases does a nearest neighbor search in the induced embeddings return a correct translation of a query word.5 The sizes of Wikipedias naturally vary across languages: e.g., fastText trains on approximately 16M sentences and 363M word tokens for Spanish, while it trains on 1M sentences and 12M words for Finnish. However, the difference in performance cannot be explained by the difference in training data sizes. To verify that near-zero performance in Finnish is not a result of insufficient training data, we have conducted another experiment using the large Finnish WaC corpus (Ljubeši´c et al., 2016) containing 1.7B words in total (this is similar in size to the English Polyglot Wikipedia). However, even with this large Finnish corpus, the model does not induce anything useful: P@1 equals 0.0. We note that while languages with mixed marking may be harder to align, it seems unsupervised BDI is possible between similar, mixed marking languages. 
So while unsupervised learning fails for English-Finnish and English-Estonian, performance is reasonable and stable for the more similar Estonian-Finnish pair (Table 2). In general, unsupervised BDI, using the approach in Conneau et al. (2018), seems challenged when pairing En5We note, though, that varying our random seed, performance for Estonian, Finnish, and Greek is sometimes (approximately 1 out of 10 runs) on par with Turkish. Detecting main causes and remedies for the inherent instability of adversarial training is one the most important avenues for future research. 783 glish with languages that are not isolating and do not have dependent marking.6 The promise of zero-supervision models is that we can learn cross-lingual embeddings even for low-resource languages. On the other hand, a similar distribution of embeddings requires languages to be similar. This raises the question whether we need fully unsupervised methods at all. In fact, our supervised method that relies on very naive supervision in the form of identically spelled words leads to competitive performance for similar language pairs and better results for dissimilar pairs. The fact that we can reach competitive and more robust performance with such a simple heuristic questions the true applicability of fully unsupervised approaches and suggests that it might often be better to rely on available weak supervision. 4.3 Impact of domain differences Monolingual word embeddings used in Conneau et al. (2018) are induced from Wikipedia, a nearparallel corpus. In order to assess the sensitivity of unsupervised BDI to the comparability and domain similarity of the monolingual corpora, we replicate the experiments in Conneau et al. (2018) using combinations of word embeddings extracted from three different domains: 1) parliamentary proceedings from EuroParl.v7 (Koehn, 2005), 2) Wikipedia (Al-Rfou et al., 2013), and 3) the EMEA corpus in the medical domain (Tiedemann, 2009). We report experiments with three language pairs: English{Spanish, Finnish, Hungarian}. To control for the corpus size, we restrict each corpus in each language to 1.1M sentences in total (i.e., the number of sentences in the smallest, EMEA corpus). 300-dim fastText vectors are induced as in §4.1, retaining all words with more than 5 occurrences in the training data. For each pair of monolingual corpora, we compute their domain (dis)similarity by calculating the Jensen-Shannon divergence (El-Gamal, 1991), based on term distributions.7 The domain similarities are displayed in Figures 2a–c.8 We show the results of unsupervised BDI in Figures 2g–i. For Spanish, we see good performance in all three cases where the English and Spanish 6One exception here is French, which they include in their paper, but French arguably has a relatively simple morphology. 7In order to get comparable term distributions, we translate the source language to the target language using the bilingual dictionaries provided by Conneau et al. (2018). 8We also computed A-distances (Blitzer et al., 2007) and confirmed that trends were similar. corpora are from the same domain. When the two corpora are from different domains, performance is close to zero. For Finnish and Hungarian, performance is always poor, suggesting that more data is needed, even when domains are similar. This is in sharp contrast with the results of our minimally supervised approach (Figures 2d–f) based on identical words, which achieves decent performance in many set-ups. 
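For reference, the domain similarity used in this section (dsim = 1 − JS over term distributions) can be computed as sketched below. The snippet only illustrates the metric: it assumes tokenised corpora as input and omits the dictionary-based translation of source-language terms described in footnote 7.

# Sketch: domain similarity between two corpora as 1 - Jensen-Shannon divergence
# over their unigram term distributions. Corpora are assumed to be lists of
# tokenised sentences; smoothing and vocabulary handling are simplified.
from collections import Counter
import numpy as np

def term_distribution(corpus, vocab):
    counts = Counter(tok for sent in corpus for tok in sent)
    freqs = np.array([counts[w] for w in vocab], dtype=float) + 1e-12   # avoid zeros
    return freqs / freqs.sum()

def domain_similarity(corpus_a, corpus_b):
    vocab = sorted(set(t for s in corpus_a for t in s) | set(t for s in corpus_b for t in s))
    p, q = term_distribution(corpus_a, vocab), term_distribution(corpus_b, vocab)
    m = 0.5 * (p + q)
    kl = lambda x, y: np.sum(x * np.log2(x / y))
    js = 0.5 * kl(p, m) + 0.5 * kl(q, m)            # Jensen-Shannon divergence (base 2)
    return 1.0 - js                                  # higher = more similar domains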
We also observe a strong decrease in P@1 for English-Spanish (from 81.19% to 46.52%) when using the smaller Wikipedia corpora. This result indicates the importance of procuring large monolingual corpora from similar domains in order to enable unsupervised dictionary induction. However, resource-lean languages, for which the unsupervised method was designed in the first place, cannot be guaranteed to have as large monolingual training corpora as available for English, Spanish or other major resource-rich languages. 4.4 Impact of hyper-parameters Conneau et al. (2018) use the same hyperparameters for inducing embeddings for all languages. This is of course always practically possible, but we are interested in seeing whether their approach works on pre-trained embeddings induced with possibly very different hyper-parameters. We focus on two hyper-parameters: context windowsize (win) and the parameter controlling the number of n-gram features in the fastText model (chn), while at the same time varying the underlying algorithm: skip-gram vs. cbow. The results for EnglishSpanish are listed in Table 3. The small variations in the hyper-parameters with the same underlying algorithm (i.e., using skipgram or cbow for both EN and ES) yield only slight drops in the final scores. Still, the best scores are obtained with the same configuration on both sides. Our main finding here is that unsupervised BDI fails (even) for EN-ES when the two monolingual embedding spaces are induced by two different algorithms (see the results of the entire Spanish cbow column).9 In sum, this means that the unsupervised approach is unlikely to work on pre-trained word embeddings unless they are induced on same9We also checked if this result might be due to a lowerquality monolingual ES space. However, monolingual word similarity scores on available datasets in Spanish show performance comparable to that of Spanish skip-gram vectors: e.g., Spearman’s ρ correlation is ≈0.7 on the ES evaluation set from SemEval-2017 Task 2 (Camacho-Collados et al., 2017). 784 EN:EP EN:Wiki EN:EMEA Training Corpus (English) 0.50 0.55 0.60 0.65 0.70 0.75 Jensen-Shannon Similarity EP Wiki EMEA (a) en-es: domain similarity EN:EP EN:Wiki EN:EMEA Training Corpus (English) 0.50 0.55 0.60 0.65 0.70 0.75 Jensen-Shannon Similarity EP Wiki EMEA (b) en-fi: domain similarity EN:EP EN:Wiki EN:EMEA Training Corpus (English) 0.50 0.55 0.60 0.65 0.70 0.75 Jensen-Shannon Similarity EP Wiki EMEA (c) en-hu: domain similarity EN:EP EN:Wiki EN:EMEA Training Corpus (English) 0 10 20 30 40 50 60 BLI: P@1 64.09 25.48 4.84 25.17 46.52 6.63 9.42 9.63 49.24 (d) en-es: identical words EN:EP EN:Wiki EN:EMEA Training Corpus (English) 0 10 20 30 40 50 60 BLI: P@1 28.63 10.14 2.31 5.84 11.08 2.27 4.97 5.96 8.11 (e) en-fi: identical words EN:EP EN:Wiki EN:EMEA Training Corpus (English) 0 10 20 30 40 50 60 BLI: P@1 26.99 9.22 2.26 7.07 14.79 1.58 3.74 3.45 15.56 (f) en-hu: identical words EN:EP EN:Wiki EN:EMEA Training Corpus (English) 0 10 20 30 40 50 60 BLI: P@1 61.01 0.13 0.0 0.11 41.38 0.0 0.0 0.08 49.43 (g) en-es: fully unsupervised BLI EN:EP EN:Wiki EN:EMEA Training Corpus (English) 0 10 20 30 40 50 60 BLI: P@1 0.72 0.10 0.0 0.0 0.0 0.16 0.0 0.0 0.96 (h) en-fi: fully unsupervised BLI EN:EP EN:Wiki EN:EMEA Training Corpus (English) 0 10 20 30 40 50 60 BLI: P@1 0.24 0.11 0.0 0.11 6.68 0.0 0.0 0.0 0.45 (i) en-hu: fully unsupervised BLI Figure 2: Influence of language-pair and domain similarity on BLI performance, with three language pairs (en-es/fi/hu). 
Top row, (a)-(c): Domain similarity (higher is more similar) computed as dsim = 1 −JS, where JS is Jensen-Shannon divergence; Middle row, (d)-(f): baseline BLI model which learns a linear mapping between two monolingual spaces based on a set of identical (i.e., shared) words; Bottom row, (g)-(i): fully unsupervised BLI model relying on the distribution-level alignment and adversarial training. Both BLI models apply the Procrustes analysis and use CSLS to retrieve nearest neighbours. or comparable-domain, reasonably-sized training data using the same underlying algorithm. 4.5 Impact of dimensionality We also perform an experiment on 40-dimensional monolingual word embeddings. This leads to reduced expressivity, and can potentially make the geometric shapes of embedding spaces harder to align; on the other hand, reduced dimensionality may also lead to less overfitting. We generally see worse performance (P@1 is 50.33 for Spanish, 21.81 for Hungarian, 20.11 for Polish, and 22.03 for Turkish) – but, very interestingly, we obtain better performance for Estonian (13.53), Finnish (15.33), and Greek (24.17) than we did with 300 dimensions. We hypothesize this indicates monolingual word embedding algorithms over-fit to some of the rarer peculiarities of these languages. 785 English (skipgram, win=2, chn=3-6) Spanish Spanish (skipgram) (cbow) == 81.89 00.00 ̸= win=10 81.28 00.07 ̸= chn=2-7 80.74 00.00 ̸= win=10, chn=2-7 80.15 00.13 Table 3: Varying the underlying fastText algorithm and hyper-parameters. The first column lists differences in training configurations between English and Spanish monolingual embeddings. en-es en-hu en-fi Noun 80.94 26.87 00.00 Verb 66.05 25.44 00.00 Adjective 85.53 53.28 00.00 Adverb 80.00 51.57 00.00 Other 73.00 53.40 00.00 Table 4: P@1 × 100% scores for query words with different parts-of-speech. 4.6 Impact of evaluation procedure BDI models are evaluated on a held-out set of query words. Here, we analyze the performance of the unsupervised approach across different parts-ofspeech, frequency bins, and with respect to query words that have orthographically identical counterparts in the target language with the same or a different meaning. Part-of-speech We show the impact of the partof-speech of the query words in Table 4; again on a representative subset of our languages. The results indicate that performance on verbs is lowest across the board. This is consistent with research on distributional semantics and verb meaning (Schwartz et al., 2015; Gerz et al., 2016). Frequency We also investigate the impact of the frequency of query words. We calculate the word frequency of English words based on Google’s Trillion Word Corpus: query words are divided in groups based on their rank – i.e., the first group contains the top 100 most frequent words, the second one contains the 101th-1000th most frequent words, etc. – and plot performance (P@1) relative to rank in Figure 3. For EN-FI, P@1 was 0 across all frequency ranks. The plot shows sensitivity to frequency for HU, but less so for ES. Homographs Since we use identical word forms (homographs) for supervision, we investigated 20 40 60 80 100 1000 10000 P@1×100% Word frequency rank en-es en-hu Figure 3: P@1 scores for EN-ES and EN-HU for queries with different frequency ranks. Spelling Meaning en-es en-hu en-fi Same Same 45.94 18.07 00.00 Same Diff 39.66 29.97 00.00 Diff Diff 62.42 34.45 00.00 Table 5: Scores (P@1 × 100%) for query words with same and different spellings and meanings. 
whether these are representative or harder to align than other words. Table 5 lists performance for three sets of query words: (a) source words that have homographs (words that are spelled the same way) with the same meaning (homonyms) in the target language, e.g., many proper names; (b) source words that have homographs that are not homonyms in the target language, e.g., many short words; and (c) other words. Somewhat surprisingly, words which have translations that are homographs, are associated with lower precision than other words. This is probably due to loan words and proper names, but note that using homographs as supervision for alignment, we achieve high precision for this part of the vocabulary for free. 4.7 Evaluating eigenvector similarity Finally, in order to get a better understanding of the limitations of unsupervised BDI, we correlate the graph similarity metric described in §2 (right column of Table 2) with performance across languages (left column). Since we already established that the monolingual word embeddings are far from isomorphic—in contrast with the intuitions motivating previous work (Mikolov et al., 2013b; Barone, 2016; Zhang et al., 2017; Conneau et al., 2018)— we would like to establish another diagnostic metric that identifies embedding spaces for which the approach in Conneau et al. (2018) is likely to work. Differences in morphology, domain, or embedding parameters seem to be predictive of poor performance, but a metric that is independent of linguistic 786 0 20 40 60 80 1 2 3 4 5 6 7 Figure 4: Strong correlation (ρ = 0.89) between BDI performance (x) and graph similarity (y) categorizations and the characteristics of the monolingual corpora would be more widely applicable. We plot the values in Table 2 in Figure 4. Recall that our graph similarity metric returns a value in the half-open interval [0, ∞). The correlation between BDI performance and graph similarity is strong (ρ ∼0.89). 5 Related work Cross-lingual word embeddings Cross-lingual word embedding models typically, unlike Conneau et al. (2018), require aligned words, sentences, or documents (Levy et al., 2017). Most approaches based on word alignments learn an explicit mapping between the two embedding spaces (Mikolov et al., 2013b; Xing et al., 2015). Recent approaches try to minimize the amount of supervision needed (Vuli´c and Korhonen, 2016; Artetxe et al., 2017; Smith et al., 2017). See Upadhyay et al. (2016) and Ruder et al. (2018) for surveys. Unsupervised cross-lingual learning Haghighi et al. (2008) were first to explore unsupervised BDI, using features such as context counts and orthographic substrings, and canonical correlation analysis. Recent approaches use adversarial learning (Goodfellow et al., 2014) and employ a discriminator, trained to distinguish between the translated source and the target language space, and a generator learning a translation matrix (Barone, 2016). Zhang et al. (2017), in addition, use different forms of regularization for convergence, while Conneau et al. (2018) uses additional steps to refine the induced embedding space. Unsupervised machine translation Research on unsupervised machine translation (Lample et al., 2018a; Artetxe et al., 2018; Lample et al., 2018b) has generated a lot of interest recently with a promise to support the construction of MT systems for and between resource-poor languages. All unsupervised NMT methods critically rely on accurate unsupervised BDI and back-translation. 
Models are trained to reconstruct a corrupted version of the source sentence and to translate its translated version back to the source language. Since the crucial input to these systems are indeed cross-lingual word embedding spaces induced in an unsupervised fashion, in this paper we also implicitly investigate one core limitation of such unsupervised MT techniques. 6 Conclusion We investigated when unsupervised BDI (Conneau et al., 2018) is possible and found that differences in morphology, domains or word embedding algorithms may challenge this approach. Further, we found eigenvector similarity of sampled nearest neighbor subgraphs to be predictive of unsupervised BDI performance. We hope that this work will guide further developments in this new and exciting field. Acknowledgments We thank the anonymous reviewers, as well as Hinrich Schütze and Yova Kementchedjhieva, for their valuable feedback. Anders is supported by the ERC Starting Grant LOWLANDS No. 313695 and a Google Focused Research Award. Sebastian is supported by Irish Research Council Grant Number EBPPG/2014/30 and Science Foundation Ireland Grant Number SFI/12/RC/2289. Ivan is supported by the ERC Consolidator Grant LEXICAL No. 648909. References Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In Proceedings of CoNLL, pages 183–192. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of ACL, pages 451–462. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In Proceedings of ICLR (Conference Track). Antonio Valerio Miceli Barone. 2016. Towards crosslingual distributed representations without parallel 787 text trained with adversarial autoencoders. Proceedings of the 1st Workshop on Representation Learning for NLP, pages 121–126. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of ACL, 1, pages 440–447. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:125–136. Jose Camacho-Collados, Mohammad Taher Pilehvar, Nigel Collier, and Roberto Navigli. 2017. SemEval2017 Task 2: Multilingual and cross-lingual semantic word similarity. In Proceedings of SEMEVAL, pages 15–26. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. Proceedings of ICLR. L. P. Cordella, P. Foggia, C. Sansone, and M. Vento. 2001. An improved algorithm for matching large graphs. Proceedings of the 3rd IAPR TC-15 Workshop on Graphbased Representations in Pattern Recognition, 17:1–35. Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. In Proceedings of ICLR (Workshop Papers). M. A El-Gamal. 1991. The role of priors in active Bayesian learning in the sequential statistical decision framework. In Maximum Entropy and Bayesian Methods, pages 33–38. Springer Netherlands. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Francois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17:1–35. 
Daniela Gerz, Ivan Vuli´c, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. SimVerb-3500: A largescale evaluation set of verb similarity. In Proceedings of EMNLP, pages 2173–2182. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Proceedings of NIPS, pages 2672– 2680. Carolyn Gordon, David L. Webb, and Scott Wolpert. 1992. One cannot hear the shape of a drum. Bulletin of the American Mathematical Society. Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of ACL, pages 771–779. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of COLING, pages 1459–1474. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the 10th Machine Translation Summit (MT SUMMIT), pages 79–86. Guillaume Lample, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In Proceedings of ICLR (Conference Papers). Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine translation. CoRR, abs/1804.07755. Omer Levy, Anders Søgaard, and Yoav Goldberg. 2017. A strong baseline for learning cross-lingual word embeddings from sentence alignments. In Proceedings of EACL, pages 765–774. Nikola Ljubeši´c, Tommi Pirinen, and Antonio Toral. 2016. Finnish Web corpus fiWaC 1.0. Slovenian language resource repository CLARIN.SI. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111–3119. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for machine translation. Milos Radovanovi´c, Alexandros Nanopoulos, and Mirjana Ivanovic. 2010. Hubs in space: Popular nearest neighbors in high-dimensional data. Journal of Machine Learning Research, 11:2487–2531. Sebastian Ruder, Ivan Vuli´c, and Anders Søgaard. 2018. A survey of cross-lingual word embedding models. Journal of Artificial Intelligence Research. Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for improved word similarity prediction. In Proceedings of CoNLL, pages 258–267. Vijayalaxmi Shigehalli and Vidya Shettar. 2011. Spectral technique using normalized adjacency matrices for graph matching. International Journal of Computational Science and Mathematics, 3:371–378. Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of ICLR (Conference Papers). Jörg Tiedemann. 2009. News from OPUS - A collection of multilingual parallel corpora with tools and interfaces. In Proceedings of RANLP, pages 237– 248. 788 Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word embeddings: An empirical comparison. In Proceedings of ACL, pages 1661–1670. Ivan Vuli´c and Anna Korhonen. 2016. On the role of seed lexicons in learning bilingual word embeddings. In Proceedings of ACL, pages 247–257. Ivan Vuli´c and Marie-Francine Moens. 2013. A study on bootstrapping bilingual vector spaces from nonparallel data (and nothing else). 
In Proceedings of EMNLP, pages 1613–1624. Chao Xing, Chao Liu, Dong Wang, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of NAACL-HLT, pages 1005–1010. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of ACL, pages 1959–1970.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 789–798 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 789 A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings Mikel Artetxe and Gorka Labaka and Eneko Agirre IXA NLP Group University of the Basque Country (UPV/EHU) {mikel.artetxe,gorka.labaka,e.agirre}@ehu.eus Abstract Recent work has managed to learn crosslingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in more realistic scenarios. This work proposes an alternative approach based on a fully unsupervised initialization that explicitly exploits the structural similarity of the embeddings, and a robust self-learning algorithm that iteratively improves this solution. Our method succeeds in all tested scenarios and obtains the best published results in standard datasets, even surpassing previous supervised systems. Our implementation is released as an open source project at https://github. com/artetxem/vecmap. 1 Introduction Cross-lingual embedding mappings have shown to be an effective way to learn bilingual word embeddings (Mikolov et al., 2013; Lazaridou et al., 2015). The underlying idea is to independently train the embeddings in different languages using monolingual corpora, and then map them to a shared space through a linear transformation. This allows to learn high-quality cross-lingual representations without expensive supervision, opening new research avenues like unsupervised neural machine translation (Artetxe et al., 2018b; Lample et al., 2018). While most embedding mapping methods rely on a small seed dictionary, adversarial training has recently produced exciting results in fully unsupervised settings (Zhang et al., 2017a,b; Conneau et al., 2018). However, their evaluation has focused on particularly favorable conditions, limited to closely-related languages or comparable Wikipedia corpora. When tested on more realistic scenarios, we find that they often fail to produce meaningful results. For instance, none of the existing methods works in the standard EnglishFinnish dataset from Artetxe et al. (2017), obtaining translation accuracies below 2% in all cases (see Section 5). On another strand of work, Artetxe et al. (2017) showed that an iterative self-learning method is able to bootstrap a high quality mapping from very small seed dictionaries (as little as 25 pairs of words). However, their analysis reveals that the self-learning method gets stuck in poor local optima when the initial solution is not good enough, thus failing for smaller training dictionaries. In this paper, we follow this second approach and propose a new unsupervised method to build an initial solution without the need of a seed dictionary, based on the observation that, given the similarity matrix of all words in the vocabulary, each word has a different distribution of similarity values. Two equivalent words in different languages should have a similar distribution, and we can use this fact to induce the initial set of word pairings (see Figure 1). We combine this initialization with a more robust self-learning method, which is able to start from the weak initial solution and iteratively improve the mapping. 
Coupled together, we provide a fully unsupervised crosslingual mapping method that is effective in realistic settings, converges to a good solution in all cases tested, and sets a new state-of-the-art in bilingual lexicon extraction, even surpassing previous supervised methods. 790 EN − two IT − due (two) IT − cane (dog) 0 100 200 300 −0.02 −0.01 0.00 0.01 0.02 −0.02 −0.01 0.00 0.01 0.02 −0.02 −0.01 0.00 0.01 0.02 Figure 1: Motivating example for our unsupervised initialization method, showing the similarity distributions of three words (corresponding to the smoothed density estimates from the normalized square root of the similarity matrices as defined in Section 3.2). Equivalent translations (two and due) have more similar distributions than non-related words (two and cane - meaning dog). This observation is used to build an initial solution that is later improved through self-learning. 2 Related work Cross-lingual embedding mapping methods work by independently training word embeddings in two languages, and then mapping them to a shared space using a linear transformation. Most of these methods are supervised, and use a bilingual dictionary of a few thousand entries to learn the mapping. Existing approaches can be classified into regression methods, which map the embeddings in one language using a leastsquares objective (Mikolov et al., 2013; Shigeto et al., 2015; Dinu et al., 2015), canonical methods, which map the embeddings in both languages to a shared space using canonical correlation analysis and extensions of it (Faruqui and Dyer, 2014; Lu et al., 2015), orthogonal methods, which map the embeddings in one or both languages under the constraint of the transformation being orthogonal (Xing et al., 2015; Artetxe et al., 2016; Zhang et al., 2016; Smith et al., 2017), and margin methods, which map the embeddings in one language to maximize the margin between the correct translations and the rest of the candidates (Lazaridou et al., 2015). Artetxe et al. (2018a) showed that many of them could be generalized as part of a multi-step framework of linear transformations. A related research line is to adapt these methods to the semi-supervised scenario, where the training dictionary is much smaller and used as part of a bootstrapping process. While similar ideas where already explored for traditional count-based vector space models (Peirsman and Pad´o, 2010; Vuli´c and Moens, 2013), Artetxe et al. (2017) brought this approach to pre-trained low-dimensional word embeddings, which are more widely used nowadays. More concretely, they proposed a selflearning approach that alternates the mapping and dictionary induction steps iteratively, obtaining results that are comparable to those of supervised methods when starting with only 25 word pairs. A practical approach for reducing the need of bilingual supervision is to design heuristics to build the seed dictionary. The role of the seed lexicon in learning cross-lingual embedding mappings is analyzed in depth by Vuli´c and Korhonen (2016), who propose using document-aligned corpora to extract the training dictionary. A more common approach is to rely on shared words and cognates (Peirsman and Pad´o, 2010; Smith et al., 2017), while Artetxe et al. (2017) go further and restrict themselves to shared numerals. However, while these approaches are meant to eliminate the need of bilingual data in practice, they also make strong assumptions on the writing systems of languages (e.g. that they all use a common alphabet or Arabic numerals). 
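For comparison with these heuristics, harvesting a seed dictionary from identically spelled words reduces to a vocabulary intersection, as sketched below. The assumed data layout (two vocabulary lists from the pre-trained embedding spaces) is only for illustration; the snippet is not taken from any of the cited systems.

# Sketch: build a weak seed dictionary from identically spelled words.
# vocab_src and vocab_trg are assumed to be the (lowercased) vocabularies of the
# two pre-trained embedding spaces, ordered as in the embedding matrices.
def identical_word_seeds(vocab_src, vocab_trg):
    trg_index = {w: j for j, w in enumerate(vocab_trg)}
    # each shared form is paired with itself; no transliteration or lemmatisation
    return [(i, trg_index[w]) for i, w in enumerate(vocab_src) if w in trg_index]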
Closer to our work, a recent line of fully unsupervised approaches drops these assumptions completely, and attempts to learn cross-lingual embedding mappings based on distributional information alone. For that purpose, existing methods rely on adversarial training. This was first proposed by Miceli Barone (2016), who combine an encoder that maps source language embeddings into the target language, a decoder that reconstructs the source language embeddings from the mapped embeddings, and a discriminator that discriminates between the mapped embeddings and the true target language embed791 dings. Despite promising, they conclude that their model “is not competitive with other cross-lingual representation approaches”. Zhang et al. (2017a) use a very similar architecture, but incorporate additional techniques like noise injection to aid training and report competitive results on bilingual lexicon extraction. Conneau et al. (2018) drop the reconstruction component, regularize the mapping to be orthogonal, and incorporate an iterative refinement process akin to self-learning, reporting very strong results on a large bilingual lexicon extraction dataset. Finally, Zhang et al. (2017b) adopt the earth mover’s distance for training, optimized through a Wasserstein generative adversarial network followed by an alternating optimization procedure. However, all this previous work used comparable Wikipedia corpora in most experiments and, as shown in Section 5, face difficulties in more challenging settings. 3 Proposed method Let X and Z be the word embedding matrices in two languages, so that their ith row Xi∗and Zi∗ denote the embeddings of the ith word in their respective vocabularies. Our goal is to learn the linear transformation matrices WX and WZ so the mapped embeddings XWX and ZWZ are in the same cross-lingual space. At the same time, we aim to build a dictionary between both languages, encoded as a sparse matrix D where Dij = 1 if the jth word in the target language is a translation of the ith word in the source language. Our proposed method consists of four sequential steps: a pre-processing that normalizes the embeddings (§3.1), a fully unsupervised initialization scheme that creates an initial solution (§3.2), a robust self-learning procedure that iteratively improves this solution (§3.3), and a final refinement step that further improves the resulting mapping through symmetric re-weighting (§3.4). 3.1 Embedding normalization Our method starts with a pre-processing that length normalizes the embeddings, then mean centers each dimension, and then length normalizes them again. The first two steps have been shown to be beneficial in previous work (Artetxe et al., 2016), while the second length normalization guarantees the final embeddings to have a unit length. As a result, the dot product of any two embeddings is equivalent to their cosine similarity and directly related to their Euclidean distance1, and can be taken as a measure of their similarity. 3.2 Fully unsupervised initialization The underlying difficulty of the mapping problem in its unsupervised variant is that the word embedding matrices X and Z are unaligned across both axes: neither the ith vocabulary item Xi∗and Zi∗ nor the jth dimension of the embeddings X∗j and Z∗j are aligned, so there is no direct correspondence between both languages. 
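As a brief aside before the initialization is described, the pre-processing of Section 3.1 amounts to three array operations. The sketch below assumes the embeddings are given as NumPy matrices with one word vector per row and is only an illustration of the description above.

# Sketch of the Section 3.1 pre-processing: length-normalise, mean-centre each
# dimension, then length-normalise again so every row has unit length.
import numpy as np

def normalize_embeddings(X, eps=1e-8):
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)   # unit length
    X = X - X.mean(axis=0, keepdims=True)                      # mean-centre dimensions
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)   # unit length again
    return X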
In order to overcome this challenge and build an initial solution, we propose to first construct two alternative representations X′ and Z′ that are aligned across their jth dimension X′ ∗j and Z′ ∗j, which can later be used to build an initial dictionary that aligns their respective vocabularies. Our approach is based on a simple idea: while the axes of the original embeddings X and Z are different in nature, both axes of their corresponding similarity matrices MX = XXT and MZ = ZZT correspond to words, which can be exploited to reduce the mismatch to a single axis. More concretely, assuming that the embedding spaces are perfectly isometric, the similarity matrices MX and MZ would be equivalent up to a permutation of their rows and columns, where the permutation in question defines the dictionary across both languages. In practice, the isometry requirement will not hold exactly, but it can be assumed to hold approximately, as the very same problem of mapping two embedding spaces without supervision would otherwise be hopeless. Based on that, one could try every possible permutation of row and column indices to find the best match between MX and MZ, but the resulting combinatorial explosion makes this approach intractable. In order to overcome this problem, we propose to first sort the values in each row of MX and MZ, resulting in matrices sorted(MX) and sorted(MZ)2. Under the strict isometry condition, equivalent words would get the exact same vector across languages, and thus, given a word and its row in sorted(MX), one could apply nearest neighbor retrieval over the rows of sorted(MZ) to find its corresponding translation. On a final note, given the singular value decomposition X = USV T , the similarity matrix 1Given two length normalized vectors u and v, u · v = cos(u, v) = 1 −||u −v||2/2. 2Note that the values in each row are sorted independently from other rows. 792 is MX = US2U T . As such, its square root √MX = USUT is closer in nature to the original embeddings, and we also find it to work better in practice. We thus compute sorted(√MX) and sorted(√MZ) and normalize them as described in Section 3.1, yielding the two matrices X′ and Z′ that are later used to build the initial solution for self-learning (see Section 3.3). In practice, the isometry assumption is strong enough so the above procedure captures some cross-lingual signal. In our English-Italian experiments, the average cosine similarity across the gold standard translation pairs is 0.009 for a random solution, 0.582 for the optimal supervised solution, and 0.112 for the mapping resulting from this initialization. While the latter is far from being useful on its own (the accuracy of the resulting dictionary is only 0.52%), it is substantially better than chance, and it works well as an initial solution for the self-learning method described next. 3.3 Robust self-learning Previous work has shown that self-learning can learn high-quality bilingual embedding mappings starting with as little as 25 word pairs (Artetxe et al., 2017). In this method, training iterates through the following two steps until convergence: 1. Compute the optimal orthogonal mapping maximizing the similarities for the current dictionary D: arg max WX,WZ X i X j Dij((Xi∗WX) · (Zj∗WZ)) An optimal solution is given by WX = U and WZ = V , where USV T = XT DZ is the singular value decomposition of XT DZ. 2. Compute the optimal dictionary over the similarity matrix of the mapped embeddings XWXW T Z ZT . 
This typically uses nearest neighbor retrieval from the source language into the target language, so Dij = 1 if j = argmaxk (Xi∗WX) · (Zk∗WZ) and Dij = 0 otherwise. The underlying optimization objective is independent from the initial dictionary, and the algorithm is guaranteed to converge to a local optimum of it. However, the method does not work if starting from a completely random solution, as it tends to get stuck in poor local optima in that case. For that reason, we use the unsupervised initialization procedure at Section 3.2 to build an initial solution. However, simply plugging in both methods did not work in our preliminary experiments, as the quality of this initial method is not good enough to avoid poor local optima. For that reason, we next propose some key improvements in the dictionary induction step to make self-learning more robust and learn better mappings: • Stochastic dictionary induction. In order to encourage a wider exploration of the search space, we make the dictionary induction stochastic by randomly keeping some elements in the similarity matrix with probability p and setting the remaining ones to 0. As a consequence, the smaller the value of p is, the more the induced dictionary will vary from iteration to iteration, thus enabling to escape poor local optima. So as to find a fine-grained solution once the algorithm gets into a good region, we increase this value during training akin to simulated annealing, starting with p = 0.1 and doubling this value every time the objective function at step 1 above does not improve more than ǫ = 10−6 for 50 iterations. • Frequency-based vocabulary cutoff. The size of the similarity matrix grows quadratically with respect to that of the vocabularies. This does not only increase the cost of computing it, but it also makes the number of possible solutions grow exponentially3, presumably making the optimization problem harder. Given that less frequent words can be expected to be noisier, we propose to restrict the dictionary induction process to the k most frequent words in each language, where we find k = 20, 000 to work well in practice. • CSLS retrieval. Dinu et al. (2015) showed that nearest neighbor suffers from the hubness problem. This phenomenon is known to occur as an effect of the curse of dimensionality, and causes a few points (known as hubs) to be nearest neighbors of many other points (Radovanovi´c et al., 2010a,b). Among the existing solutions to penalize the similarity score of hubs, we adopt the Cross-domain 3There are mn possible combinations that go from a source vocabulary of n entries to a target vocabulary of m entries. 793 Similarity Local Scaling (CSLS) from Conneau et al. (2018). Given two mapped embeddings x and y, the idea of CSLS is to compute rT(x) and rS(y), the average cosine similarity of x and y for their k nearest neighbors in the other language, respectively. Having done that, the corrected score CSLS(x, y) = 2 cos(x, y) −rT(x) −rS(y). Following the authors, we set k = 10. • Bidirectional dictionary induction. When the dictionary is induced from the source into the target language, not all target language words will be present in it, and some will occur multiple times. We argue that this might accentuate the problem of local optima, as repeated words might act as strong attractors from which it is difficult to escape. In order to mitigate this issue and encourage diversity, we propose inducing the dictionary in both directions and taking their corresponding concatenation, so D = DX→Z +DZ→X. 
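A simplified sketch of one self-learning iteration may help make the procedure concrete: the orthogonal mapping is obtained from the SVD of X^T D Z, and the dictionary is then re-induced with CSLS in both directions. The snippet below works on dense matrices and omits the stochastic zeroing and frequency-based vocabulary cutoff described above; it is an illustration under those simplifying assumptions, not the released implementation.

# Simplified sketch of one self-learning iteration (Section 3.3).
# X, Z: normalised embedding matrices (one word per row); src, trg: index arrays
# encoding the current dictionary D, i.e. D[src[i], trg[i]] = 1.
import numpy as np

def orthogonal_mapping(X, Z, src, trg):
    u, s, vt = np.linalg.svd(X[src].T @ Z[trg])     # SVD of X^T D Z over the pairs
    return u, vt.T                                   # W_X = U, W_Z = V

def csls_dictionary(XW, ZW, k=10):
    sims = XW @ ZW.T                                 # cosine sims (rows stay unit length)
    knn_src = -np.sort(-sims, axis=1)[:, :k].mean(axis=1)   # r_T(x): mean sim to k NNs
    knn_trg = -np.sort(-sims, axis=0)[:k, :].mean(axis=0)   # r_S(y)
    csls = 2 * sims - knn_src[:, None] - knn_trg[None, :]
    fwd = csls.argmax(axis=1)                        # source -> target translations
    bwd = csls.argmax(axis=0)                        # target -> source translations
    src = np.concatenate([np.arange(XW.shape[0]), bwd])
    trg = np.concatenate([fwd, np.arange(ZW.shape[0])])
    return src, trg                                  # D = D_{X->Z} + D_{Z->X}

# One iteration, starting from the current dictionary (src, trg):
wx, wz = orthogonal_mapping(X, Z, src, trg)
src, trg = csls_dictionary(X @ wx, Z @ wz)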
In order to build the initial dictionary, we compute X′ and Z′ as detailed in Section 3.2 and apply the above procedure over them. As the only difference, this first solution does not use the stochastic zeroing in the similarity matrix, as there is no need to encourage diversity (X′ and Z′ are only used once), and the threshold for vocabulary cutoff is set to k = 4, 000, so X′ and Z′ can fit in memory. Having computed the initial dictionary, X′ and Z′ are discarded, and the remaining iterations are performed over the original embeddings X and Z. 3.4 Symmetric re-weighting As part of their multi-step framework, Artetxe et al. (2018a) showed that re-weighting the target language embeddings according to the crosscorrelation in each component greatly improved the quality of the induced dictionary. Given the singular value decomposition USV T = XT DZ, this is equivalent to taking WX = U and WZ = V S, where X and Z are previously whitened applying the linear transformations (XT X)−1 2 and (ZT Z)−1 2 , and later de-whitened applying U T (XT X) 1 2 U and V T (ZT Z) 1 2 V . However, re-weighting also accentuates the problem of local optima when incorporated into self-learning as, by increasing the relevance of dimensions that best match for the current solution, it discourages to explore other regions of the search space. For that reason, we propose using it as a final step once self-learning has converged to a good solution. Unlike Artetxe et al. (2018a), we apply re-weighting symmetrically in both languages, taking WX = US 1 2 and WZ = V S 1 2 . This approach is neutral in the direction of the mapping, and gives good results as shown in our experiments. 4 Experimental settings Following common practice, we evaluate our method on bilingual lexicon extraction, which measures the accuracy of the induced dictionary in comparison to a gold standard. As discussed before, previous evaluation has focused on favorable conditions. In particular, existing unsupervised methods have almost exclusively been tested on Wikipedia corpora, which is comparable rather than monolingual, exposing a strong cross-lingual signal that is not available in strictly unsupervised settings. In addition to that, some datasets comprise unusually small embeddings, with only 50 dimensions and around 5,00010,000 vocabulary items (Zhang et al., 2017a,b). As the only exception, Conneau et al. (2018) report positive results on the English-Italian dataset of Dinu et al. (2015) in addition to their main experiments, which are carried out in Wikipedia. While this dataset does use strictly monolingual corpora, it still corresponds to a pair of two relatively close indo-european languages. In order to get a wider picture of how our method compares to previous work in different conditions, including more challenging settings, we carry out our experiments in the widely used dataset of Dinu et al. (2015) and the subsequent extensions of Artetxe et al. (2017, 2018a), which together comprise English-Italian, English-German, English-Finnish and EnglishSpanish. More concretely, the dataset consists of 300-dimensional CBOW embeddings trained on WacKy crawling corpora (English, Italian, German), Common Crawl (Finnish) and WMT News Crawl (Spanish). The gold standards were derived from dictionaries built from Europarl word alignments and available at OPUS (Tiedemann, 2012), split in a test set of 1,500 entries and a training set of 5,000 that we do not use in our experiments. The datasets are freely available. 
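Returning briefly to Section 3.4, the core of the symmetric re-weighting step can be written down compactly; the whitening and de-whitening transformations described there are omitted here for brevity, so the snippet is an illustration of the re-weighting alone rather than the released code.

# Core of symmetric re-weighting (Section 3.4), whitening/de-whitening omitted:
# W_X = U S^(1/2), W_Z = V S^(1/2), where U S V^T is the SVD of X^T D Z for the
# final dictionary encoded by the index arrays (src, trg).
import numpy as np

def symmetric_reweighting(X, Z, src, trg):
    u, s, vt = np.linalg.svd(X[src].T @ Z[trg])
    wx = u @ np.diag(s ** 0.5)
    wz = vt.T @ np.diag(s ** 0.5)
    return X @ wx, Z @ wz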
As Finnish is a non-Indo-European agglutinative language, the English-Finnish pair is particularly challenging due to the linguistic distance between the two languages.

Table 1: Results of unsupervised methods on the dataset of Zhang et al. (2017a). We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).

| Method | ES-EN best / avg / s / t | IT-EN best / avg / s / t | TR-EN best / avg / s / t |
| Zhang et al. (2017a), λ = 1 | 71.43 / 68.18 / 10 / 13.2 | 60.38 / 56.45 / 10 / 12.3 | 0.00 / 0.00 / 0 / 13.0 |
| Zhang et al. (2017a), λ = 10 | 70.24 / 66.37 / 10 / 13.0 | 57.64 / 52.60 / 10 / 12.6 | 21.07 / 17.95 / 10 / 13.2 |
| Conneau et al. (2018), code | 76.18 / 75.82 / 10 / 25.1 | 67.32 / 67.00 / 10 / 25.9 | 32.64 / 14.34 / 5 / 25.3 |
| Conneau et al. (2018), paper | 76.15 / 75.81 / 10 / 25.1 | 67.21 / 60.22 / 9 / 25.5 | 29.79 / 16.48 / 7 / 25.5 |
| Proposed method | 76.43 / 76.28 / 10 / 0.6 | 66.96 / 66.92 / 10 / 0.9 | 36.10 / 35.93 / 10 / 1.7 |

Table 2: Results of unsupervised methods on the dataset of Dinu et al. (2015) and the extensions of Artetxe et al. (2017, 2018a). We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).

| Method | EN-IT best / avg / s / t | EN-DE best / avg / s / t | EN-FI best / avg / s / t | EN-ES best / avg / s / t |
| Zhang et al. (2017a), λ = 1 | 0.00 / 0.00 / 0 / 47.0 | 0.00 / 0.00 / 0 / 47.0 | 0.00 / 0.00 / 0 / 45.4 | 0.00 / 0.00 / 0 / 44.3 |
| Zhang et al. (2017a), λ = 10 | 0.00 / 0.00 / 0 / 46.6 | 0.00 / 0.00 / 0 / 46.0 | 0.07 / 0.01 / 0 / 44.9 | 0.07 / 0.01 / 0 / 43.0 |
| Conneau et al. (2018), code | 45.40 / 13.55 / 3 / 46.1 | 47.27 / 42.15 / 9 / 45.4 | 1.62 / 0.38 / 0 / 44.4 | 36.20 / 21.23 / 6 / 45.3 |
| Conneau et al. (2018), paper | 45.27 / 9.10 / 2 / 45.4 | 0.07 / 0.01 / 0 / 45.0 | 0.07 / 0.01 / 0 / 44.7 | 35.47 / 7.09 / 2 / 44.9 |
| Proposed method | 48.53 / 48.13 / 10 / 8.9 | 48.47 / 48.19 / 10 / 7.3 | 33.50 / 32.63 / 10 / 12.9 | 37.60 / 37.33 / 10 / 9.1 |

For completeness, we also test our method on the Spanish-English, Italian-English and Turkish-English datasets of Zhang et al. (2017a), which consist of 50-dimensional CBOW embeddings trained on Wikipedia, as well as gold standard dictionaries from Open Multilingual WordNet (Spanish-English and Italian-English) and Google Translate (Turkish-English); the test dictionaries were obtained through personal communication with the authors, and the rest of the language pairs were left out due to licensing issues. The lower dimensionality and comparable corpora make this an easier scenario, although it also contains a challenging pair of distant languages (Turkish-English).

Our method is implemented in Python using NumPy and CuPy. Together with it, we also test the methods of Zhang et al. (2017a) and Conneau et al. (2018) using the publicly available implementations from the authors. Despite our efforts, Zhang et al. (2017b) was left out because (1) it does not create a one-to-one dictionary, which makes direct comparison difficult, (2) it depends on expensive proprietary software, and (3) its computational cost is orders of magnitude higher (running the experiments would have taken several months). Given that Zhang et al. (2017a) report using a different value of their hyperparameter λ for different language pairs (λ = 10 for English-Turkish and λ = 1 for the rest), we test both values in all our experiments to better understand its effect. In the case of Conneau et al. (2018), we test both the default hyperparameters in the source code as well as those reported in the paper, with iterative refinement activated in both cases.
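The bilingual lexicon extraction evaluation used throughout this section boils down to precision@1 of the induced translations against the gold test dictionary. The following sketch is illustrative only (it is not the authors' evaluation script); it assumes the gold standard has been loaded as a mapping from source word indices to sets of acceptable target indices, and retrieves translations with CSLS as in the method itself.

```python
import numpy as np

def translation_accuracy(xw, zw, gold, knn=10):
    """Precision@1 on a gold test dictionary.
    xw, zw: mapped, length-normalized source / target embeddings.
    gold:   dict {source_index: set_of_gold_target_indices}."""
    src = np.array(sorted(gold))
    sim = xw[src] @ zw.T                                   # cosine similarities
    # CSLS correction: penalize target-side hubs and source-side neighbourhoods.
    r_trg = np.mean(np.sort(zw @ xw.T, axis=1)[:, -knn:], axis=1)
    r_src = np.mean(np.sort(sim, axis=1)[:, -knn:], axis=1)
    pred = (2 * sim - r_src[:, None] - r_trg[None, :]).argmax(axis=1)
    return float(np.mean([p in gold[s] for s, p in zip(src, pred)]))
```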
Given the instability of these methods, we perform 10 runs for each, and report the best and average accuracies, the number of successful runs (those with >5% accuracy) and the average runtime. All the experiments were run on a single Nvidia Titan Xp.

5 Results and discussion

We first present the main results (§5.1), then the comparison to the state-of-the-art (§5.2), and finally ablation tests to measure the contribution of each component (§5.3).

5.1 Main results

We report the results on the dataset of Zhang et al. (2017a) in Table 1. As can be seen, the proposed method performs on par with that of Conneau et al. (2018) on both Spanish-English and Italian-English, but gets substantially better results on the more challenging Turkish-English pair. While we are able to reproduce the results reported by Zhang et al. (2017a), their method gets the worst results of all by a large margin. Another disadvantage of that model is that different language pairs require different hyperparameters: λ = 1 works substantially better for Spanish-English and Italian-English, but only λ = 10 works for Turkish-English.

Table 3: Accuracy (%) of the proposed method in comparison with previous work. *Results obtained with the official implementation from the authors. †Results obtained with the framework from Artetxe et al. (2018a). The remaining results were reported in the original papers. For methods that do not require supervision, we report the average accuracy across 10 runs. ‡For meaningful comparison, runs with <5% accuracy are excluded when computing the average, but note that, unlike ours, their method often gives a degenerate solution (see Table 2).

| Supervision | Method | EN-IT | EN-DE | EN-FI | EN-ES |
| 5k dict. | Mikolov et al. (2013) | 34.93† | 35.00† | 25.91† | 27.73† |
| 5k dict. | Faruqui and Dyer (2014) | 38.40* | 37.13* | 27.60* | 26.80* |
| 5k dict. | Shigeto et al. (2015) | 41.53† | 43.07† | 31.04† | 33.73† |
| 5k dict. | Dinu et al. (2015) | 37.7 | 38.93* | 29.14* | 30.40* |
| 5k dict. | Lazaridou et al. (2015) | 40.2 | - | - | - |
| 5k dict. | Xing et al. (2015) | 36.87† | 41.27† | 28.23† | 31.20† |
| 5k dict. | Zhang et al. (2016) | 36.73† | 40.80† | 28.16† | 31.07† |
| 5k dict. | Artetxe et al. (2016) | 39.27 | 41.87* | 30.62* | 31.40* |
| 5k dict. | Artetxe et al. (2017) | 39.67 | 40.87 | 28.72 | - |
| 5k dict. | Smith et al. (2017) | 43.1 | 43.33† | 29.42† | 35.13† |
| 5k dict. | Artetxe et al. (2018a) | 45.27 | 44.13 | 32.94 | 36.60 |
| 25 dict. | Artetxe et al. (2017) | 37.27 | 39.60 | 28.16 | - |
| Init. heurist. | Smith et al. (2017), cognates | 39.9 | - | - | - |
| Init. heurist. | Artetxe et al. (2017), num. | 39.40 | 40.27 | 26.47 | - |
| None | Zhang et al. (2017a), λ = 1 | 0.00* | 0.00* | 0.00* | 0.00* |
| None | Zhang et al. (2017a), λ = 10 | 0.00* | 0.00* | 0.01* | 0.01* |
| None | Conneau et al. (2018), code‡ | 45.15* | 46.83* | 0.38* | 35.38* |
| None | Conneau et al. (2018), paper‡ | 45.1 | 0.01* | 0.01* | 35.44* |
| None | Proposed method | 48.13 | 48.19 | 32.63 | 37.33 |

The results for the more challenging dataset from Dinu et al. (2015) and the extensions of Artetxe et al. (2017, 2018a) are given in Table 2. In this case, our proposed method obtains the best results in all metrics for all four language pairs tested. The method of Zhang et al. (2017a) does not work at all in this more challenging scenario, which is in line with the negative results reported by the authors themselves for similar conditions (only 2.53% accuracy in their large Gigaword dataset). The method of Conneau et al. (2018) also fails for English-Finnish (only 1.62% in the best run), although it is able to get positive results in some runs for the rest of the language pairs. Between the two configurations tested, the default hyperparameters in the code show a more stable behavior. These results confirm the robustness of the proposed method.
While the other systems succeed in some runs and fail in others, our method converges to a good solution in all runs without exception and, in fact, it is the only one getting positive results for English-Finnish. In addition to being more robust, our method also obtains substantially better accuracies, surpassing previous methods by at least 1-3 points in all but the easiest pairs. Moreover, our method is not sensitive to hyperparameters that are difficult to tune without a development set, which is critical in realistic unsupervised conditions. At the same time, our method is significantly faster than the rest. In relation to that, it is interesting that, while previous methods perform a fixed number of iterations and take practically the same time for all the different language pairs, the runtime of our method adapts to the difficulty of the task thanks to the dynamic convergence criterion of our stochastic approach. This way, our method tends to take longer for more challenging language pairs (1.7 vs 0.6 minutes for tr-en and es-en in one dataset, and 12.9 vs 7.3 minutes for en-fi and en-de in the other) and, in fact, our (relative) execution times correlate surprisingly well with the linguistic distance from English (closest/fastest is German, followed by Italian/Spanish, followed by Turkish/Finnish).

Table 4: Ablation test on the dataset of Dinu et al. (2015) and the extensions of Artetxe et al. (2017, 2018a). We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).

| Variant | EN-IT best / avg / s / t | EN-DE best / avg / s / t | EN-FI best / avg / s / t | EN-ES best / avg / s / t |
| Full system | 48.53 / 48.13 / 10 / 8.9 | 48.47 / 48.19 / 10 / 7.3 | 33.50 / 32.63 / 10 / 12.9 | 37.60 / 37.33 / 10 / 9.1 |
| - Unsup. init. | 0.07 / 0.02 / 0 / 16.5 | 0.00 / 0.00 / 0 / 17.3 | 0.07 / 0.01 / 0 / 13.8 | 0.13 / 0.02 / 0 / 15.9 |
| - Stochastic | 48.20 / 48.20 / 10 / 2.7 | 48.13 / 48.13 / 10 / 2.5 | 0.28 / 0.28 / 0 / 4.3 | 37.80 / 37.80 / 10 / 2.6 |
| - Cutoff (k=100k) | 46.87 / 46.46 / 10 / 114.5 | 48.27 / 48.12 / 10 / 105.3 | 31.95 / 30.78 / 10 / 162.5 | 35.47 / 34.88 / 10 / 185.2 |
| - CSLS | 0.00 / 0.00 / 0 / 15.0 | 0.00 / 0.00 / 0 / 13.8 | 0.00 / 0.00 / 0 / 13.1 | 0.00 / 0.00 / 0 / 14.1 |
| - Bidirectional | 46.00 / 45.37 / 10 / 5.6 | 48.27 / 48.03 / 10 / 5.5 | 31.39 / 24.86 / 8 / 7.8 | 36.20 / 35.77 / 10 / 7.3 |
| - Re-weighting | 46.07 / 45.61 / 10 / 8.4 | 48.13 / 47.41 / 10 / 7.0 | 32.94 / 31.77 / 10 / 11.2 | 36.00 / 35.45 / 10 / 9.1 |

5.2 Comparison with the state-of-the-art

Table 3 shows the results of the proposed method in comparison to previous systems, including those with different degrees of supervision. We focus on the widely used English-Italian dataset of Dinu et al. (2015) and its extensions. Despite being fully unsupervised, our method achieves the best results in all language pairs but one, even surpassing previous supervised approaches. The only exception is English-Finnish, where Artetxe et al. (2018a) gets marginally better results with a difference of 0.3 points, yet ours is the only unsupervised system that works for this pair. At the same time, it is remarkable that the proposed system gets substantially better results than Artetxe et al. (2017), the only other system based on self-learning, with the additional advantage of being fully unsupervised.

5.3 Ablation test

In order to better understand the role of different aspects in the proposed system, we perform an ablation test, where we separately analyze the effect of initialization, the different components of our robust self-learning algorithm, and the final symmetric re-weighting. The obtained results are reported in Table 4.
Consistent with previous work, our results show that self-learning does not work with random initialization. However, the proposed unsupervised initialization is able to overcome this issue without the need for any additional information, performing on par with other character-level heuristics that we tested (e.g. shared numerals). As for the different self-learning components, we observe that the stochastic dictionary induction is necessary to overcome the problem of poor local optima for English-Finnish, although it does not make any difference for the rest of the (easier) language pairs. The frequency-based vocabulary cutoff also has a positive effect, yielding slightly better accuracies and much faster runtimes. At the same time, CSLS plays a critical role in the system, as hubness severely accentuates the problem of local optima in its absence. The bidirectional dictionary induction is also beneficial, contributing to the robustness of the system as shown by English-Finnish and yielding better accuracies in all cases. Finally, these results also show that symmetric re-weighting contributes positively, bringing an improvement of around 1-2 points without any cost in the execution time.

6 Conclusions

In this paper, we show that previous unsupervised mapping methods (Zhang et al., 2017a; Conneau et al., 2018) often fail in realistic scenarios involving non-comparable corpora and/or distant languages. In contrast to adversarial methods, we propose to use an initial weak mapping that exploits the structure of the embedding spaces in combination with a robust self-learning approach. The results show that our method succeeds in all cases, providing the best results with respect to all previous work on unsupervised and supervised mappings. The ablation analysis shows that our initial solution is instrumental for making self-learning work without supervision. In order to make self-learning robust, we also added stochasticity to dictionary induction, used CSLS instead of nearest neighbor, and produced bidirectional dictionaries. Results also improved using smaller intermediate vocabularies and re-weighting the final solution. Our implementation is available as an open source project at https://github.com/artetxem/vecmap. In the future, we would like to extend the method from the bilingual to the multilingual scenario, and go beyond the word level by incorporating embeddings of longer phrases.

Acknowledgments

This research was partially supported by the Spanish MINECO (TUNER TIN2015-65308-C51-R, MUSTER PCIN-2015-226 and TADEEP TIN2015-70214-P, cofunded by EU FEDER), the UPV/EHU (excellence research group), and the NVIDIA GPU grant program. Mikel Artetxe enjoys a doctoral grant from the Spanish MECD.

References

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2289–2294, Austin, Texas. Association for Computational Linguistics.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462, Vancouver, Canada. Association for Computational Linguistics.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a.
Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pages 5012–5019. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised neural machine translation. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018). Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018). Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), workshop track. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 462–471, Gothenburg, Sweden. Association for Computational Linguistics. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018). Angeliki Lazaridou, Georgiana Dinu, and Marco Baroni. 2015. Hubness and pollution: Delving into cross-space mapping for zero-shot learning. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 270– 280, Beijing, China. Association for Computational Linguistics. Ang Lu, Weiran Wang, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Deep multilingual correlation for improved word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 250–256, Denver, Colorado. Association for Computational Linguistics. Antonio Valerio Miceli Barone. 2016. Towards crosslingual distributed representations without parallel text trained with adversarial autoencoders. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 121–126, Berlin, Germany. Association for Computational Linguistics. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Yves Peirsman and Sebastian Pad´o. 2010. Crosslingual induction of selectional preferences with bilingual vector spaces. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 921–929, Los Angeles, California. Association for Computational Linguistics. Miloˇs Radovanovi´c, Alexandros Nanopoulos, and Mirjana Ivanovi´c. 2010a. Hubs in space: Popular nearest neighbors in high-dimensional data. Journal of Machine Learning Research, 11(Sep):2487–2531. Milos Radovanovi´c, Alexandros Nanopoulos, and Mirjana Ivanovi´c. 2010b. On the existence of obstinate results in vector space models. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, pages 186–193. 798 Yutaro Shigeto, Ikumi Suzuki, Kazuo Hara, Masashi Shimbo, and Yuji Matsumoto. 2015. 
Ridge regression, hubness, and zero-shot learning. Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2015, Proceedings, Part I, pages 135–151. Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017). J¨org Tiedemann. 2012. Parallel data, tools and interfaces in opus. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 2214–2218, Istanbul, Turkey. European Language Resources Association (ELRA). Ivan Vuli´c and Anna Korhonen. 2016. On the role of seed lexicons in learning bilingual word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 247–257, Berlin, Germany. Association for Computational Linguistics. Ivan Vuli´c and Marie-Francine Moens. 2013. A study on bootstrapping bilingual vector spaces from nonparallel data (and nothing else). In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1613–1624, Seattle, Washington, USA. Association for Computational Linguistics. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011, Denver, Colorado. Association for Computational Linguistics. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017a. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1959–1970, Vancouver, Canada. Association for Computational Linguistics. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Earth mover’s distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1934–1945, Copenhagen, Denmark. Association for Computational Linguistics. Yuan Zhang, David Gaddy, Regina Barzilay, and Tommi Jaakkola. 2016. Ten pairs to tag – multilingual pos tagging via coarse mapping between embeddings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1307–1317, San Diego, California. Association for Computational Linguistics.
2018
73
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 799–809 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 799 A Multi-lingual Multi-task Architecture for Low-resource Sequence Labeling Ying Lin1 ∗, Shengqi Yang2, Veselin Stoyanov3, Heng Ji1 1 Computer Science Department, Rensselaer Polytechnic Institute, Troy, NY, USA {liny9,jih}@rpi.edu 2 Intelligent Advertising Lab, JD.com, Santa Clara, CA, USA [email protected] 3 Applied Machine Learning, Facebook, Menlo Park, CA, USA [email protected] Abstract We propose a multi-lingual multi-task architecture to develop supervised models with a minimal amount of labeled data for sequence labeling. In this new architecture, we combine various transfer models using two layers of parameter sharing. On the first layer, we construct the basis of the architecture to provide universal word representation and feature extraction capability for all models. On the second level, we adopt different parameter sharing strategies for different transfer schemes. This architecture proves to be particularly effective for low-resource settings, when there are less than 200 training sentences for the target task. Using Name Tagging as a target task, our approach achieved 4.3%-50.5% absolute Fscore gains compared to the mono-lingual single-task baseline model. 1 1 Introduction When we use supervised learning to solve Natural Language Processing (NLP) problems, we typically train an individual model for each task with task-specific labeled data. However, our target task may be intrinsically linked to other tasks. For example, Part-of-speech (POS) tagging and Name Tagging can both be considered as sequence labeling; Machine Translation (MT) and Abstractive Text Summarization both require the ability to understand the source text and generate natural language sentences. Therefore, it is valuable to transfer knowledge from related tasks to the target task. Multi-task Learning (MTL) is one of ∗* Part of this work was done when the first author was on an internship at Facebook. 1The code of our model is available at https://github. com/limteng-rpi/mlmt the most effective solutions for knowledge transfer across tasks. In the context of neural network architectures, we usually perform MTL by sharing parameters across models (Ruder, 2017). Previous studies (Collobert and Weston, 2008; Dong et al., 2015; Luong et al., 2016; Liu et al., 2018; Yang et al., 2017) have proven that MTL is an effective approach to boost the performance of related tasks such as MT and parsing. However, most of these previous efforts focused on tasks and languages which have sufficient labeled data but hit a performance ceiling on each task alone. Most NLP tasks, including some well-studied ones such as POS tagging, still suffer from the lack of training data for many low-resource languages. According to Ethnologue2, there are 7, 099 living languages in the world. It is an unattainable goal to annotate data in all languages, especially for tasks with complicated annotation requirements. Furthermore, some special applications (e.g., disaster response and recovery) require rapid development of NLP systems for extremely low-resource languages. Therefore, in this paper, we concentrate on enhancing supervised models in low-resource settings by borrowing knowledge learned from related high-resource languages and tasks. 
In (Yang et al., 2017), the authors simulated a low-resource setting for English and Spanish by downsampling the training data for the target task. However, for most low-resource languages, the data sparsity problem also lies in related tasks and languages. Under such circumstances, a single transfer model can only bring limited improvement. To tackle this issue, we propose a multi-lingual multi-task architecture which combines different transfer models within a unified architecture through two levels of parameter sharing. In the first level, we share character embeddings, 2https://www.ethnologue.com/guides/ how-many-languages 800 character-level convolutional neural networks, and word-level long-short term memory layer across all models. These components serve as a basis to connect multiple models and transfer universal knowledge among them. In the second level, we adopt different sharing strategies for different transfer schemes. For example, we use the same output layer for all Name Tagging tasks to share task-specific knowledge (e.g., I-PER3 should not be assigned to the first word in a sentence). To illustrate our idea, we take sequence labeling as a case study. In the NLP context, the goal of sequence labeling is to assign a categorical label (e.g., POS tag) to each token in a sentence. It underlies a range of fundamental NLP tasks, including POS Tagging, Name Tagging, and chunking. Experiments show that our model can effectively transfer various types of knowledge from different auxiliary tasks and obtains up to 50.5% absolute F-score gains on Name Tagging compared to the mono-lingual single-task baseline. Additionally, our approach does not rely on a large amount of auxiliary task data to achieve the improvement. Using merely 1% auxiliary data, we already obtain up to 9.7% absolute gains in Fscore. 2 Model 2.1 Basic Architecture The goal of sequence labeling is to assign a categorical label to each token in a given sentence. Though traditional methods such as Hidden Markov Models (HMMs) and Conditional Random Fields (CRFs) (Lafferty et al., 2001; Ratinov and Roth, 2009; Passos et al., 2014) achieved high performance on sequence labeling tasks, they typically relied on hand-crafted features, therefore it is difficult to adapt them to new tasks or languages. To avoid task-specific engineering, (Collobert et al., 2011) proposed a feed-forward neural network model that only requires word embeddings trained on a large scale corpus as features. After that, several neural models based on the combination of long-short term memory (LSTM) and CRFs (Ma and Hovy, 2016; Lample et al., 2016; Chiu and Nichols, 2016) were proposed and 3We adopt the BIOES annotation scheme. Prefixes B-, I, E-, and S- represent the beginning of a mention, inside of a mention, the end of a mention and a single-token mention respectively. The O tag is assigned to a word which is not part of any mention. achieved better performance on sequence labeling tasks. 
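The BIOES scheme described in the footnote above can be derived mechanically from plain BIO tags. The helper below is a small illustration of the scheme, not part of the released code.

```python
def bio_to_bioes(tags):
    """Convert BIO tags (B-PER, I-PER, O, ...) to BIOES: single-token mentions
    become S-*, and the last token of a multi-token mention becomes E-*."""
    bioes = []
    for i, tag in enumerate(tags):
        if tag == 'O':
            bioes.append(tag)
            continue
        prefix, etype = tag.split('-', 1)
        nxt = tags[i + 1] if i + 1 < len(tags) else 'O'
        continues = (nxt == 'I-' + etype)            # mention goes on after this token
        if prefix == 'B':
            bioes.append(('B-' if continues else 'S-') + etype)
        else:                                        # prefix == 'I'
            bioes.append(('I-' if continues else 'E-') + etype)
    return bioes

# A single-token location followed by a two-token person mention:
print(bio_to_bioes(['B-LOC', 'O', 'B-PER', 'I-PER']))
# -> ['S-LOC', 'O', 'B-PER', 'E-PER']
```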
Figure 1: LSTM-CNNs: an LSTM-CRFs-based model for Sequence Labeling

LSTM-CRFs-based models are well-suited for multi-lingual multi-task learning for three reasons: (1) They learn features from word and character embeddings and therefore require little feature engineering; (2) As the input and output of each layer in a neural network are abstracted as vectors, it is fairly straightforward to share components between neural models; (3) Character embeddings can serve as a bridge to transfer morphological and semantic information between languages with identical or similar scripts, without requiring cross-lingual dictionaries or parallel sentences. Therefore, we design our multi-task multi-lingual architecture based on the LSTM-CNNs model proposed in (Chiu and Nichols, 2016). The overall framework is illustrated in Figure 1.

First, each word w_i is represented as the combination x_i of two parts, a word embedding and a character feature vector, which is extracted from the character embeddings of the characters in w_i using convolutional neural networks (CharCNN). On top of that, a bidirectional LSTM processes the sequence x = {x_1, x_2, ...} in both directions and encodes each word and its context into a fixed-size vector h_i. Next, a linear layer converts h_i to a score vector y_i, in which each component represents the predicted score of a target tag. In order to model correlations between tags, a CRFs layer is added at the top to generate the best tagging path for the whole sequence. In the CRFs layer, given an input sentence x of length L and the output of the linear layer y, the score of a sequence of tags z is defined as:

S(x, y, z) = Σ_{t=1}^{L} (A_{z_{t-1}, z_t} + y_{t, z_t}),

where A is a transition matrix in which A_{p,q} represents the binary score of transitioning from tag p to tag q, and y_{t,z} represents the unary score of assigning tag z to the t-th word. Given the ground truth sequence of tags z, we maximize the following objective function during the training phase:

O = log P(z|x) = S(x, y, z) - log Σ_{z̃ ∈ Z} e^{S(x, y, z̃)},

where Z is the set of all possible tagging paths. We emphasize that our actual implementation differs slightly from the LSTM-CNNs model. We do not use additional word- and character-level explicit symbolic features (e.g., capitalization and lexicon) as they may require additional language-specific knowledge. Additionally, we transform character feature vectors using highway networks (Srivastava et al., 2015), which is reported to enhance the overall performance by (Kim et al., 2016) and (Liu et al., 2018). Highway networks are a type of neural network that can smoothly switch between transforming and carrying information.

2.2 Multi-task Multi-lingual Architecture

MTL can be employed to improve performance on multiple tasks at the same time, such as MT and parsing in (Luong et al., 2016). However, in our scenario, we focus only on enhancing the performance of a low-resource task, which is our target task or main task. Our proposed architecture aims to transfer knowledge from a set of auxiliary tasks to the main task. For simplicity, we refer to a model of a main (auxiliary) task as a main (auxiliary) model. To jointly train multiple models, we perform multi-task learning using parameter sharing. Let Θ_i be the set of parameters for model m_i and Θ_{i,j} = Θ_i ∩ Θ_j be the shared parameters between m_i and m_j. When optimizing model m_i, we update Θ_i and hence Θ_{i,j}. In this way, we can partially train model m_j, as Θ_{i,j} ⊆ Θ_j. Previously, each MTL model generally used a single transfer scheme.
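The two formulas above translate directly into code. The sketch below is an illustrative NumPy version (not the released implementation): y is the (L, T) matrix of unary scores from the linear layer, A the (T, T) transition matrix, and the partition function is computed with the standard forward algorithm; the transition from a start symbol is omitted for brevity.

```python
import numpy as np

def sequence_score(y, A, z):
    """S(x, y, z) = sum_t A[z_{t-1}, z_t] + y[t, z_t]."""
    unary = sum(y[t, z[t]] for t in range(len(z)))
    binary = sum(A[z[t - 1], z[t]] for t in range(1, len(z)))
    return unary + binary

def log_partition(y, A):
    """log of the sum over all tagging paths of exp(S), via the forward algorithm."""
    alpha = y[0]                                   # (T,) scores after the first word
    for t in range(1, len(y)):
        # alpha[j] = logsumexp_i(alpha[i] + A[i, j]) + y[t, j]
        alpha = np.logaddexp.reduce(alpha[:, None] + A + y[t][None, :], axis=0)
    return np.logaddexp.reduce(alpha)

def crf_objective(y, A, z):
    # O = log P(z | x) = S(x, y, z) - log sum_{z'} exp(S(x, y, z'))
    return sequence_score(y, A, z) - log_partition(y, A)
```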
In order to merge different transfer models into a unified architecture, we employ two levels of parameter sharing as follows. On the first level, we construct the basis of the architecture by sharing character embeddings, CharCNN and bidirectional LSTM among all models. This level of parameter sharing aims to provide universal word representation and feature extraction capability for all tasks and languages. Character Embeddings and Character-level CNNs. Character features can represent morphological and semantic information; e.g., the English morpheme dis- usually indicates negation and reversal as in “disagree” and “disapproval”. For low-resource languages lacking in data to suffice the training of high-quality word embeddings, character embeddings learned from other languages may provide crucial information for labeling, especially for rare and out-of-vocabulary words. Take the English word “overflying” (flying over) as an example. Even if it is rare or absent in the corpus, we can still infer the word meaning from its suffix over- (above), root fly, and prefix -ing (present participle form). In our architecture, we share character embeddings and the CharCNN between languages with identical or similar scripts to enhance word representation for low-resource languages. Bidirectional LSTM. The bidirectional LSTM layer is essential to extract character, word, and contextual information from a sentence. However, with a large number of parameters, it cannot be fully trained only using the low-resource task data. To tackle this issue, we share the bidirectional LSTM layer across all models. Bear in mind that because our architecture does not require aligned cross-lingual word embeddings, sharing this layer across languages may confuse the model as it equally handles embeddings in different spaces. Nevertheless, under low-resource circumstances, data sparsity is the most critical factor that affects the performance. On top of this basis, we adopt different parameter sharing strategies for different transfer schemes. For cross-task transfer, we use the same word embedding matrix across tasks so that they can mutually enhance word representations. For cross-lingual transfer, we share the linear layer and CRFs layer among languages to transfer taskspecific knowledge, such as the transition score between two tags. Word Embeddings. For most words, in addition to character embeddings, word embeddings are still crucial to represent semantic informa802 Figure 2: Multi-task Multi-lingual Architecture tion. We use the same word embedding matrix for tasks in the same language. The matrix is initialized with pre-trained embeddings and optimized as parameters during training. Thus, task-specific knowledge can be encoded into the word embeddings by one task and subsequently utilized by another one. For a low-resource language even without sufficient raw text, we mix its data with a related high-resource language to train word embeddings. In this way, we merge both corpora and hence their vocabularies. Recently, Conneau et al. (2017) proposed a domain-adversarial method to align two monolingual word embedding matrices without crosslingual supervision such as a bilingual dictionary. Although cross-lingual word embeddings are not required, we evaluate our framework with aligned embeddings generated using this method. Experiment results show that the incorporation of crosslingual embeddings substantially boosts the performance under low-resource settings. Linear Layer and CRFs. 
As the tag set varies from task to task, the linear layer and CRFs can only be shared across languages. We share these layers to transfer task-specific knowledge to the main model. For example, our model corrects [S-PER Charles] [S-PER Picqué] to [B-PER Charles] [E-PER Picqué] because the CRFs layer, fully trained on other languages, assigns a low score to the rare transition S-PER→S-PER and promotes B-PER→E-PER. In addition to the shared linear layer, we add an unshared language-specific linear layer to allow the model to behave differently toward some features for different languages. For example, the suffix -ment usually indicates nouns in English, whereas it indicates adverbs in French. We combine the output of the shared linear layer y_u and the output of the language-specific linear layer y_s using:

y = g ⊙ y_s + (1 - g) ⊙ y_u, where g = σ(W^g h + b^g).

W^g and b^g are optimized during training. h is the LSTM hidden state. As W^g is a square matrix, y, y_s, and y_u have the same dimension.

Although we only focus on sequence labeling in this work, our architecture can be adapted for many NLP tasks with slight modification. For example, for text classification tasks, we can take the last hidden state of the forward LSTM as the sentence representation and replace the CRFs layer with a Softmax layer.

In our model, each task has a separate objective function. To optimize multiple tasks within one model, we adopt the alternating training approach in (Luong et al., 2016). At each training step, we sample a task d_i with probability r_i / Σ_j r_j, where r_i is the mixing rate value assigned to d_i. In our experiments, instead of tuning r_i, we estimate it by r_i = μ_i ζ_i / √N_i, where μ_i is the task coefficient, ζ_i is the language coefficient, and N_i is the number of training examples. μ_i (or ζ_i) takes the value 1 if the task (or language) of d_i is the same as that of the target task; otherwise it takes the value 0.1. For example, given English Name Tagging as the target task, the task coefficient μ and language coefficient ζ of Spanish Name Tagging are 1 and 0.1 respectively. While assigning lower mixing rate values to auxiliary tasks, this formula also takes the amount of data into consideration. Thus, auxiliary tasks receive higher probabilities to reduce overfitting when we have a smaller amount of main task data.

3 Experiments

3.1 Data Sets

For Name Tagging, we use the following data sets: Dutch (NLD) and Spanish (ESP) data from the CoNLL 2002 shared task (Tjong Kim Sang, 2002), English (ENG) data from the CoNLL 2003 shared task (Tjong Kim Sang and De Meulder, 2003), Russian (RUS) data from LDC2016E95 (Russian Representative Language Pack), and Chechen (CHE) data from the TAC KBP 2017 10-Language EDL Pilot Evaluation Source Corpus (https://tac.nist.gov/2017/KBP/data.html). We select Chechen as another target language in addition to Dutch and Spanish because it is a truly under-resourced language and its related language, Russian, also lacks NLP resources.

Table 1: Name Tagging data set statistics: #token and #name (between parentheses).

| Code | Train | Dev | Test |
| NLD | 202,931 (13,344) | 37,761 (2,616) | 68,994 (3,941) |
| ESP | 207,484 (18,797) | 51,645 (4,351) | 52,098 (3,558) |
| ENG | 204,567 (23,499) | 51,578 (5,942) | 46,666 (5,648) |
| RUS | 66,333 (3,143) | 8,819 (413) | 7,771 (407) |
| CHE | 98,355 (2,674) | 12,265 (312) | 11,933 (366) |

For POS Tagging, we use English, Dutch, Spanish, and Russian data from the CoNLL 2017 shared task (Zeman et al., 2017; Nivre et al., 2017). In this data set, each token is annotated with two POS tags, UPOS (universal POS tag) and XPOS (language-specific POS tag).
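To make the two formulas concrete, here is a small NumPy sketch of the gated output combination and of the task-sampling probabilities. It is illustrative only: the helper names are ours, and W_g is given the tag-set dimension on its output side so that the element-wise products line up.

```python
import numpy as np

def combine_outputs(h, y_s, y_u, W_g, b_g):
    # y = g * y_s + (1 - g) * y_u, with g = sigmoid(W_g h + b_g).
    g = 1.0 / (1.0 + np.exp(-(W_g @ h + b_g)))
    return g * y_s + (1.0 - g) * y_u

def sampling_probs(datasets, target_task, target_lang):
    """Mixing rates r_i = mu_i * zeta_i / sqrt(N_i); task d_i is sampled with
    probability r_i / sum_j r_j.  `datasets` maps an id to (task, language, N_i)."""
    rates = {}
    for d, (task, lang, n) in datasets.items():
        mu = 1.0 if task == target_task else 0.1       # task coefficient
        zeta = 1.0 if lang == target_lang else 0.1     # language coefficient
        rates[d] = mu * zeta / np.sqrt(n)
    total = sum(rates.values())
    return {d: r / total for d, r in rates.items()}

# Example: Dutch Name Tagging as the main task, with English Name Tagging and
# Dutch POS Tagging as auxiliary tasks (the sentence counts are made up).
probs = sampling_probs({'nld-ner': ('ner', 'nld', 200),
                        'eng-ner': ('ner', 'eng', 14000),
                        'nld-pos': ('pos', 'nld', 12000)},
                       target_task='ner', target_lang='nld')
```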
We use UPOS because it is consistent throughout all languages.

3.2 Experimental Setup

We use 50-dimensional pre-trained word embeddings and 50-dimensional randomly initialized character embeddings. We train word embeddings using the word2vec package (https://github.com/tmikolov/word2vec). English, Spanish, and Dutch embeddings are trained on the corresponding Wikipedia articles (2017-12-20 dumps). Russian embeddings are trained on documents in LDC2016E95. Chechen embeddings are trained on documents in the TAC KBP 2017 10-Language EDL Pilot Evaluation Source Corpus. To learn a mapping between mono-lingual word embeddings and obtain cross-lingual embeddings, we use the unsupervised model in the MUSE library (https://github.com/facebookresearch/MUSE) (Conneau et al., 2017). Although word embeddings are fine-tuned during training, we update the embedding matrix in a sparse way and thus do not have to update a large number of parameters.

We optimize parameters using Stochastic Gradient Descent with momentum, gradient clipping and exponential learning rate decay. At step t, the learning rate α_t is updated using α_t = α_0 · ρ^{t/T}, where α_0 is the initial learning rate, ρ is the decay rate, and T is the decay step (momentum β, the gradient clipping threshold, ρ, and T are set to 0.9, 5.0, 0.9, and 10000 in the experiments). To reduce overfitting, we apply Dropout (Srivastava et al., 2014) to the output of the LSTM layer.

We conduct hyper-parameter optimization by exploring the space of parameters shown in Table 2 using random search (Bergstra and Bengio, 2012). Due to time constraints, we only perform parameter sweeping on the Dutch Name Tagging task with 200 training examples. We select the set of parameters that achieves the best performance on the development set and apply it to all models.

Table 2: Hyper-parameter search space.

| Layer | Hyper-parameter | Range | Final |
| CharCNN | Filter Number | [10, 30] | 20 |
| Highway | Layer Number | [1, 2] | 2 |
| Highway | Activation Function | ReLU, SeLU | SeLU |
| LSTM | Hidden State Size | [50, 200] | 171 |
| LSTM | Dropout Rate | [0.3, 0.8] | 0.6 |
| - | Learning Rate | [0.01, 0.2] | 0.02 |
| - | Batch Size | [5, 25] | 19 |

3.3 Comparison of Different Models

In Figures 3, 4, and 5, we compare our model with the mono-lingual single-task LSTM-CNNs model (denoted as baseline), the cross-task transfer model, and the cross-lingual transfer model in low-resource settings, with Dutch, Spanish, and Chechen Name Tagging as the main task respectively. We use English as the related language for Dutch and Spanish, and Russian as the related language for Chechen. For cross-task transfer, we take POS Tagging as the auxiliary task. Because the CoNLL 2017 data does not include Chechen, we only use Russian POS Tagging and Russian Name Tagging as auxiliary tasks for Chechen Name Tagging.

We take Name Tagging as the target task for three reasons: (1) POS Tagging has a much lower requirement for the amount of training data; for example, using only 10 training sentences, our baseline model achieves 75.5% and 82.9% prediction accuracy on Dutch and Spanish; (2) Compared to POS Tagging, Name Tagging has been considered a more challenging task; (3) Existing POS Tagging resources are relatively richer than Name Tagging ones; e.g., the CoNLL 2017 data set provides POS Tagging training data for 45 languages. Name Tagging also has a higher annotation cost, as its annotation guidelines are usually more complicated.
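The optimization schedule and the random search over Table 2 described in Section 3.2 above are easy to state in code. A minimal sketch; the uniform sampling distributions and function names are our own assumptions, not the authors' tuning script.

```python
import random

def learning_rate(step, alpha0=0.02, rho=0.9, T=10000):
    # Exponential decay: alpha_t = alpha_0 * rho^(t / T).
    return alpha0 * rho ** (step / T)

def sample_hyperparams(rng=random):
    """Draw one random-search trial from the space in Table 2."""
    return {
        'charcnn_filters': rng.randint(10, 30),
        'highway_layers': rng.randint(1, 2),
        'highway_activation': rng.choice(['ReLU', 'SeLU']),
        'lstm_hidden_size': rng.randint(50, 200),
        'lstm_dropout': rng.uniform(0.3, 0.8),
        'learning_rate': rng.uniform(0.01, 0.2),
        'batch_size': rng.randint(5, 25),
    }
```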
We can see that our model substantially outperforms the mono-lingual single-task baseline model and obtains visible gains over single transfer models. When trained with less than 50 main tasks training sentences, cross-lingual transfer consistently surpasses cross-task transfer, which is not surprising because in the latter scheme, the linear layer and CRFs layer of the main model are not shared with other models and thus cannot be fully trained with little data. Because there are only 20,400 sentences in Chechen documents, we also experiment with the data augmentation method described in Section 2.2 by training word embeddings on a mixture of Russian and Chechen data. This method yields additional 3.5%-10.0% absolute F-score gains. We also experiment with transferring from English to Chechen. Because Chechen uses Cyrillic alphabet , we convert its data set to Latin script. Surprisingly, although these two languages are not close, we get more improvement by using English as the auxiliary language. In Table 3, we compare our model with state-ofthe-art models using all Dutch or Spanish Name Tagging data. Results show that although we design this architecture for low-resource settings, it also achieves good performance in high-resource settings. In this experiment, with sufficient training data for the target task, we perform another round of parameter sweeping. We increase the embedding sizes and LSTM hidden state size to 100 and 225 respectively. 0 10 20 30 40 50 100 200 #Duch Name Tagging Training Sentences 0 10 20 30 40 50 60 70 F-score (%) Baseline Cross-task Cross-lingual Our Model Our Model* Figure 3: Performance on Dutch Name Tagging. We scale the horizontal axis to show more details under 100 sentences. Our Model*: our model with MUSE cross-lingual embeddings. 0 10 20 30 40 50 100 200 #Spanish Name Tagging Training Sentences 0 20 40 60 80 F-score (%) Baseline Cross-task Cross-lingual Our Model Our Model* Figure 4: Performance on Spanish Name Tagging. 0 10 20 30 40 50 100 200 #Chechen Name Tagging Training Sentences 0 10 20 30 40 50 F-score (%) Baseline Cross-lingual Our Model Our Model + Mixed Raw Data Our Model (Auxiliary language: English) Figure 5: Performance on Chechen Name Tagging. 3.4 Qualitative Analysis In Table 4, we compare Name Tagging results from the baseline model and our model, both trained with 100 main task sentences. The first three examples show that shared character-level networks can transfer different levels of morphological and semantic information. 805 Language Model F-score Dutch Gillick et al. (2016) 82.84 Lample et al. (2016) 81.74 Yang et al. (2017) 85.19 Baseline 85.14 Cross-task 85.69 Cross-lingual 85.71 Our Model 86.55 Spanish Gillick et al. (2016) 82.95 Lample et al. (2016) 85.75 Yang et al. (2017) 85.77 Baseline 85.44 Cross-task 85.37 Cross-lingual 85.02 Our Model 85.88 Table 3: Comparison with state-of-the-art models. In example #1, the baseline model fails to identify “Palestijnen”, an unseen word in the Dutch data, while our model can recognize it because the shared CharCNN represents it in a way similar to its corresponding English word “Palestinians”, which occurs 20 times. In addition to mentions, the shared CharCNN can also improve representations of context words, such as “staat” (state) in the example. For some words dissimilar to corresponding English words, the CharCNN may enhance their word representations by transferring morpheme-level knowledge. 
For example, in sentence #2, our model is able to identify “Rusland” (Russia) as the suffix -land is usually associated with location names in the English data; e.g., Finland. Furthermore, the CharCNN is capable of capturing some word-level patterns, such as capitalized hyphenated compound and acronym as example #3 shows. In this sentence, neither “PMScentra” nor “MST” can be found in auxiliary task data, while we observe a number of similar expressions, such as American-style and LDP. The transferred knowledge also helps reduce overfitting. For example, in sentence #4, the baseline model mistakenly tags “secci´on” (section) and “conseller´ıa” (department) as organizations because their capitalized forms usually appear in Spanish organization names. With knowledge learned in auxiliary tasks that a lowercased word is rarely tagged as a proper noun, our model is able to avoid overfitting and correct these errors. Sentence #5 shows an opposite situation, where the capitalized word “campesinos” (farm worker) never appears in Spanish names. In Table 5, we show differences between crosslingual transfer and cross-task transfer. Although the cross-task transfer model recognizes “Ingeborg Marx” missed by the baseline model, it mistakenly assigns an S-PER tag to “Marx”. Instead, from English Name Tagging, the cross-lingual transfer model borrows task-specific knowledge through the shared CRFs layer that (1) B-PER→SPER is an invalid transition, and (2) even if we assign S-PER to “Ingeborg”, it is rare to have continuous person names without any conjunction or punctuation. Thus, the cross-lingual model promotes the sequence B-PER→E-PER. In Figure 6, we depict the change of tag distribution with the number of training sentences. When trained with less than 100 sentences, the baseline model only correctly predicts a few tags dominated by frequent types. By contrast, our model has a visibly higher recall and better predicts infrequent tags, which can be attributed to the implicit data augmentation and inductive bias introduced by MTL (Ruder, 2017). For example, if all location names in the Dutch training data are single-token ones, the baseline model will inevitably overfit to the tag S-LOC and possibly label “Caldera de Taburiente” as [S-LOC Caldera] [S-LOC de] [S-LOC Taburiente], whereas with the shared CRFs layer fully trained on English Name Tagging, our model prefers B-LOC→I-LOC→ELOC, which receives a higher transition score. 0 10 20 30 40 50 100 200 500 all 0 2k 4k 0 10 20 30 40 50 100 200 500 all #Dutch Name Tagging Training Sentences 0 2k 4k #Correctly Predicted Tags Baseline Our Model Figure 6: The distribution of correctly predicted tags on Dutch Name Tagging. The height of each stack indicates the number of a certain tag. 3.5 Ablation Studies In order to quantify the contributions of individual components, we conduct ablation studies on Dutch Name Tagging with different numbers of training sentences for the target task. For the basic model, we we use separate LSTM layers and 806 #1 [DUTCH]: If a Palestinian State is, however, the first thing the Palestinians will do. ⋆[B] Als er een Palestijnse staat komt, is dat echter het eerste wat de Palestijnen zullen doen ⋆[A] Als er een [S-MISC Palestijnse] staat komt, is dat echter het eerste wat de [S-MISC Palestijnen] zullen doen #2 [DUTCH]: That also frustrates the Muscovites, who still live in the proud capital of Russia but can not look at the soaps that the stupid farmers can see on the outside. 
⋆[B] Ook dat frustreert de Moskovieten , die toch in de fiere hoofdstad van Rusland wonen maar niet naar de soaps kunnen kijken die de domme boeren op de buiten wel kunnen zien ⋆[A] Ook dat frustreert de [S-MISC Moskovieten] , die toch in de fiere hoofdstad van [S-LOC Rusland] wonen maar niet naar de soaps kunnen kijken die de domme boeren op de buiten wel kunnen zien #3 [DUTCH]: And the PMS centers are merging with the centers for school supervision, the MSTs. ⋆[B] En smelten de PMS-centra samen met de centra voor schooltoezicht, de MST’s . ⋆[A] En smelten de [S-MISC PMS-centra] samen met de centra voor schooltoezicht, de [S-MISC MST’s] . #4 [SPANISH]: The trade union section of CC.OO. in the Department of Justice has today denounced more attacks of students to educators in centers dependent on this department ... ⋆[B] La [B-ORG secci´on] [I-ORG sindical] [I-ORG de] [S-ORG CC.OO.] en el [B-ORG Departamento] [I-ORG de] [E-ORG Justicia] ha denunciado hoy ms agresiones de alumnos a educadores en centros dependientes de esta [S-ORG conseller´ıa] ... ⋆[A] La secci´on sindical de [S-ORG CC.OO.] en el [B-ORG Departamento] [I-ORG de] [E-ORG Justicia] ha denunciado hoy ms agresiones de alumnos a educadores en centros dependientes de esta conseller´ıa ... #5 [SPANISH]: ... and the Single Trade Union Confederation of Peasant Workers of Bolivia, agreed upon when the state of siege was ended last month. ⋆[B] ... y la [B-ORG Confederaci´on] [I-ORG Sindical] [I-ORG Unica] [I-ORG de] [E-ORG Trabajadores] Campesinos de [S-ORG Bolivia] , pactadas cuando se dio fin al estado de sitio, el mes pasado . ⋆[A] .. y la [B-ORG Confederaci´on] [I-ORG Sindical] [I-ORG Unica] [I-ORG de] [I-ORG Trabajadores] [I-ORG Campesinos] [I-ORG de] [E-ORG Bolivia] , pactadas cuando se dio fin al estado de sitio, el mes pasado . Table 4: Name Tagging results, each of which contains an English translation, result of the baseline model (B), and result of our model (A). The GREEN ( RED ) highlight indicates a correct (incorrect) tag. [DUTCH] ... Ingeborg Marx is her name, a formidable heavy weight to high above her head! ⋆[B] ... Zag ik zelfs onlangs niet dat een lief, mooi vrouwtje, Ingeborg Marx is haar naam, een formidabel zwaar gewicht tot hoog boven haar hoofd stak! ⋆[CROSS-TASK] ... Zag ik zelfs onlangs niet dat een lief, mooi vrouwtje, [B-PER Ingeborg] [S-PER Marx] is haar naam, een formidabel zwaar gewicht tot hoog boven haar hoofd stak! ⋆[CROSS-LINGUAL] ... Zag ik zelfs onlangs niet dat een lief, mooi vrouwtje, [B-PER Ingeborg] [E-PER Marx] is haar naam, een formidabel zwaar gewicht tot hoog boven haar hoofd stak! Table 5: Comparing cross-task transfer and crosslingual transfer on Dutch Name Tagging with 100 training sentences. remove the character embeddings, highway networks, language-specific layer, and Dropout layer. As Table 6 shows, adding each component usually enhances the performance (F-score, %), while the impact also depends on the size of the target task data. For example, the language-specific layer slightly impairs the performance with only 10 training sentences. However, this is unsurprising as it introduces additional parameters that are only trained by the target task data. 
Model 0 10 100 200 All Basic 2.06 20.03 47.98 51.52 77.63 +C 1.69 24.22 48.53 56.26 83.38 +CL 9.62 25.97 49.54 56.29 83.37 +CLS 3.21 25.43 50.67 56.34 84.02 +CLSH 7.70 30.48 53.73 58.09 84.68 +CLSHD 12.12 35.82 57.33 63.27 86.00 Table 6: Performance comparison between models with different components (C: character embedding; L: shared LSTM; S: language-specific layer; H: highway networks; D: dropout). 3.6 Effect of the Amount of Auxiliary Task Data For many low-resource languages, their related languages are also low-resource. To evaluate our model’s sensitivity to the amount of auxiliary task data, we fix the size of main task data and downsample all auxiliary task data with sample rates from 1% to 50%. As Figure 7 shows, the performance goes up when we raise the sample rate from 807 1% to 20%. However, we do not observe significant improvement when we further increase the sample rate. By comparing scores in Figure 3 and Figure 7, we can see that using only 1% auxiliary data, our model already obtains 3.7%-9.7% absolute F-score gains. Due to space limitations, we only show curves for Dutch Name Tagging, while we observe similar results on other tasks. Therefore, we may conclude that our model does not heavily rely on the amount of auxiliary task data. 0 0.2 0.4 0.6 0.8 1 Sample Rate for Auxiliary Task Data 0 20 40 60 F-score (%) 10 Training Sentences 50 Training Sentences 200 Training Sentences Figure 7: The effect of the amount of auxiliary task data on Dutch Name Tagging. 4 Related Work Multi-task Learning has been applied in different NLP areas, such as machine translation (Luong et al., 2016; Dong et al., 2015; Domhan and Hieber, 2017), text classification (Liu et al., 2017), dependency parsing (Peng et al., 2017), textual entailment (Hashimoto et al., 2017), text summarization (Isonuma et al., 2017) and sequence labeling (Collobert and Weston, 2008; Søgaard and Goldberg, 2016; Rei, 2017; Peng and Dredze, 2017; Yang et al., 2017; von D¨aniken and Cieliebak, 2017; Aguilar et al., 2017; Liu et al., 2018) Collobert and Weston (2008) is an early attempt that applies MTL to sequence labeling. The authors train a CNN model jointly on POS Tagging, Semantic Role Labeling, Name Tagging, chunking, and language modeling using parameter sharing. Instead of using other sequence labeling tasks, Rei (2017) and Liu et al. (2018) take language modeling as the secondary training objective to extract semantic and syntactic knowledge from large scale raw text without additional supervision. In (Yang et al., 2017), the authors propose three transfer models for crossdomain, cross-application, and cross-lingual transfer for sequence labeling, and also simulate a lowresource setting by downsampling the training data. By contrast, we combine cross-task transfer and cross-lingual transfer within a unified architecture to transfer different types of knowledge from multiple auxiliary tasks simultaneously. In addition, because our model is designed for lowresource settings, we share components among models in a different way (e.g., the LSTM layer is shared across all models). Differing from most MTL models, which perform supervisions for all tasks on the outermost layer, (Søgaard and Goldberg, 2016) proposes an MTL model which supervised tasks at different levels. It shows that supervising low-level tasks such as POS Tagging at lower layer obtains better performance. 5 Conclusions and Future Work We design a multi-lingual multi-task architecture for low-resource settings. 
We evaluate the model on sequence labeling tasks with three language pairs. Experiments show that our model can effectively transfer different types of knowledge to improve the main model. It substantially outperforms the mono-lingual single-task baseline model, cross-lingual transfer model, and crosstask transfer model. The next step of this research is to apply this architecture to other types of tasks, such as Event Extract and Semantic Role Labeling that involve structure prediction. We also plan to explore the possibility of integrating incremental learning into this architecture to adapt a trained model for new tasks rapidly. Acknowledgments This work was supported by the U.S. DARPA LORELEI Program No. HR0011-15-C-0115 and U.S. ARL NS-CTA No. W911NF-09-2-0053. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. References Gustavo Aguilar, Suraj Maharjan, Adrian Pastor L´opez Monroy, and Thamar Solorio. 2017. A multi-task 808 approach for named entity recognition in social media data. In Proceedings of the 3rd Workshop on Noisy User-generated Text. James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281–305. Jason P. C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. TACL, 4:357–370. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493–2537. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087. Pius von D¨aniken and Mark Cieliebak. 2017. Transfer learning and sentence level features for named entity recognition on tweets. In Proceedings of the 3rd Workshop on Noisy User-generated Text. Tobias Domhan and Felix Hieber. 2017. Using targetside monolingual data for neural machine translation through multi-task learning. In EMNLP. Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In ACL. Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2016. Multilingual language processing from bytes. In NAACL HLT. Kazuma Hashimoto, Yoshimasa Tsuruoka, Richard Socher, et al. 2017. A joint many-task model: Growing a neural network for multiple nlp tasks. In EMNLP. Masaru Isonuma, Toru Fujino, Junichiro Mori, Yutaka Matsuo, and Ichiro Sakata. 2017. Extractive summarization using multi-task learning with document classification. In EMNLP. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In AAAI. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL HLT. 
Liyuan Liu, Jingbo Shang, Frank Xu, Xiang Ren, Huan Gui, Jian Peng, and Jiawei Han. 2018. Empower sequence labeling with task-aware neural language model. In AAAI. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classification. In ACL. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. In ICLR. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In ACL. Joakim Nivre, ˇZeljko Agi´c, Lars Ahrenberg, Lene Antonsen, Maria Jesus Aranzabe, Masayuki Asahara, Luma Ateyah, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, John Bauer, Kepa Bengoetxea, Riyaz Ahmad Bhat, Eckhard Bick, Victoria Bobicev, Carl B¨orstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Aljoscha Burchardt, Marie Candito, Gauthier Caron, G¨uls¸en Cebiro˘glu Eryi˘git, Giuseppe G. A. Celano, Savas Cetin, Fabricio Chalub, Jinho Choi, Silvie Cinkov´a, C¸ a˘grı C¸ ¨oltekin, Miriam Connor, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Arantza Diaz de Ilarraza, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Marhaba Eli, Ali Elkahky, Tomaˇz Erjavec, Rich´ard Farkas, Hector Fernandez Alcalde, Jennifer Foster, Cl´audia Freitas, Katar´ına Gajdoˇsov´a, Daniel Galbraith, Marcos Garcia, Moa G¨ardenfors, Kim Gerdes, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh G¨okırmak, Yoav Goldberg, Xavier G´omez Guinovart, Berta Gonz´ales Saavedra, Matias Grioni, Normunds Gr¯uz¯ıtis, Bruno Guillaume, Nizar Habash, Jan Hajiˇc, Jan Hajiˇc jr., Linh H`a M˜y, Kim Harris, Dag Haug, Barbora Hladk´a, Jaroslava Hlav´aˇcov´a, Florinel Hociung, Petter Hohle, Radu Ion, Elena Irimia, Tom´aˇs Jel´ınek, Anders Johannsen, Fredrik Jørgensen, H¨uner Kas¸ıkara, Hiroshi Kanayama, Jenna Kanerva, Tolga Kayadelen, V´aclava Kettnerov´a, Jesse Kirchner, Natalia Kotsyba, Simon Krek, Veronika Laippala, Lorenzo Lambertino, Tatiana Lando, John Lee, Phng Lˆe H`ˆong, Alessandro Lenci, Saran Lertpradit, Herman Leung, Cheuk Ying Li, Josie Li, Keying Li, Nikola Ljubeˇsi´c, Olga Loginova, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, C˘at˘alina M˘ar˘anduc, David Mareˇcek, Katrin Marheinecke, H´ector Mart´ınez Alonso, Andr´e Martins, Jan Maˇsek, Yuji Matsumoto, Ryan McDonald, Gustavo Mendonc¸a, Niko Miekka, Anna Missil¨a, C˘at˘alin Mititelu, Yusuke Miyao, Simonetta Montemagni, Amir More, Laura Moreno Romero, Shinsuke Mori, Bohdan Moskalevskyi, Kadri Muischnek, Kaili M¨u¨urisep, Pinkey Nainwani, Anna Nedoluzhko, Gunta Neˇspore-B¯erzkalne, Lng 809 Nguy˜ˆen Thi., Huy`ˆen Nguy˜ˆen Thi. 
Minh, Vitaly Nikolaev, Hanna Nurmi, Stina Ojala, Petya Osenova, Robert ¨Ostling, Lilja Øvrelid, Elena Pascual, Marco Passarotti, Cenel-Augusto Perez, Guy Perrier, Slav Petrov, Jussi Piitulainen, Emily Pitler, Barbara Plank, Martin Popel, Lauma Pretkalnin¸a, Prokopis Prokopidis, Tiina Puolakainen, Sampo Pyysalo, Alexandre Rademaker, Loganathan Ramasamy, Taraka Rama, Vinit Ravishankar, Livy Real, Siva Reddy, Georg Rehm, Larissa Rinaldi, Laura Rituma, Mykhailo Romanenko, Rudolf Rosa, Davide Rovati, Benoˆıt Sagot, Shadi Saleh, Tanja Samardˇzi´c, Manuela Sanguinetti, Baiba Saul¯ıte, Sebastian Schuster, Djam´e Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Dmitry Sichinava, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simk´o, M´aria ˇSimkov´a, Kiril Simov, Aaron Smith, Antonio Stella, Milan Straka, Jana Strnadov´a, Alane Suhr, Umut Sulubacak, Zsolt Sz´ant´o, Dima Taji, Takaaki Tanaka, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Zdeˇnka Ureˇsov´a, Larraitz Uria, Hans Uszkoreit, Sowmya Vajjala, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Eric Villemonte de la Clergerie, Veronika Vincze, Lars Wallin, Jonathan North Washington, Mats Wir´en, Tak-sum Wong, Zhuoran Yu, Zdenˇek ˇZabokrtsk´y, Amir Zeldes, Daniel Zeman, and Hanzhi Zhu. 2017. Universal dependencies 2.1. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University. Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. In CoNLL. Hao Peng, Sam Thomson, and Noah A Smith. 2017. Deep multitask learning for semantic dependency parsing. In ACL. Nanyun Peng and Mark Dredze. 2017. Multi-task domain adaptation for sequence tagging. In Proceedings of the 2nd Workshop on Representation Learning for NLP. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In CoNLL. Marek Rei. 2017. Semi-supervised multitask learning for sequence labeling. In ACL. Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098. Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In ACL. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. ICML. Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In COLING. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In NAACL HLT. Zhilin Yang, Ruslan Salakhutdinov, and William W Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. In ICLR. Daniel Zeman, Martin Popel, Milan Straka, Jan Hajic, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, et al. 2017. CoNLL 2017 shared task: multilingual parsing from raw text to universal dependencies. In CoNLL.
2018
74
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 810–820 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 810 Two Methods for Domain Adaptation of Bilingual Tasks: Delightfully Simple and Broadly Applicable Viktor Hangya1, Fabienne Braune1,2, Alexander Fraser1, Hinrich Sch¨utze1 1Center for Information and Language Processing LMU Munich, Germany 2Volkswagen Data Lab Munich, Germany {hangyav, fraser}@cis.uni-muenchen.de [email protected] Abstract Bilingual tasks, such as bilingual lexicon induction and cross-lingual classification, are crucial for overcoming data sparsity in the target language. Resources required for such tasks are often out-of-domain, thus domain adaptation is an important problem here. We make two contributions. First, we test a delightfully simple method for domain adaptation of bilingual word embeddings. We evaluate these embeddings on two bilingual tasks involving different domains: cross-lingual twitter sentiment classification and medical bilingual lexicon induction. Second, we tailor a broadly applicable semi-supervised classification method from computer vision to these tasks. We show that this method also helps in low-resource setups. Using both methods together we achieve large improvements over our baselines, by using only additional unlabeled data. 1 Introduction In this paper we study two bilingual tasks that strongly depend on bilingual word embeddings (BWEs). Previously, specialized domain adaptation approaches to such tasks were proposed. We instead show experimentally that a simple adaptation process involving only unlabeled text is highly effective. We then show that a semisupervised classification method from computer vision can be applied successfully for further gains in cross-lingual classification. Our BWE adaptation method is delightfully simple. We begin by adapting monolingual word embeddings to the target domain for source and target languages by simply building them using both general and target-domain unlabeled data. As a second step we use post-hoc mapping (Mikolov et al., 2013b), i.e., we use a seed lexicon to transform the word embeddings of the two languages into the same vector space. We show experimentally for the first time that the domain-adapted bilingual word embeddings we produce using this extremely simple technique are highly effective. We study two quite different tasks and domains, where resources are lacking, showing that our simple technique performs well for both of them: cross-lingual twitter sentiment classification and medical bilingual lexicon induction. In previous work, task-dependent approaches were used for this type of domain adaptation. Our approach is simple and task independent. Second, we adapt the semi-supervised image classification system of H¨ausser et al. (2017) for NLP problems for the first time. This approach is broadly applicable to many NLP classification tasks where unlabeled data is available. We tailor it to both of our cross-lingual tasks. The system exploits unlabeled data during the training of classifiers by learning similar features for similar labeled and unlabeled training examples, thereby extracting information from unlabeled examples as well. As we show experimentally, the system further improves cross-lingual knowledge transfer for both of our tasks. 
After combining both techniques, the results of sentiment analysis are competitive with systems that use annotated data in the target language, an impressive result considering that we require no target-language annotated data. The method also yields impressive improvements for bilingual lexicon induction compared with baselines trained on in-domain data. We show that this system requires the high-quality domain-adapted bilingual word embeddings we previously created to use unlabeled data well. 811 2 Previous Work 2.1 Bilingual Word Embeddings Many approaches have been proposed for creating high quality BWEs using different bilingual signals. Following Mikolov et al. (2013b), many authors (Faruqui and Dyer, 2014; Xing et al., 2015; Lazaridou et al., 2015; Vuli´c and Korhonen, 2016) map monolingual word embeddings (MWEs) into the same bilingual space. Others leverage parallel texts (Hermann and Blunsom, 2014; Gouws et al., 2015) or create artificial cross-lingual corpora using seed lexicons or document alignments (Vuli´c and Moens, 2015; Duong et al., 2016) to train BWEs. In contrast, our aim is not to improve the intrinsic quality of BWEs, but to adapt BWEs to specific domains to enhance their performance on bilingual tasks in these domains. Faruqui et al. (2015), Gouws and Søgaard (2015), Rothe et al. (2016) have previously studied domain adaptation of bilingual word embeddings, showing it to be highly effective for improving downstream tasks. However, importantly, their proposed methods are based on specialized domain lexicons (such as, e.g., sentiment lexicons) which contain task specific word relations. Our delightfully simple approach is, in contrast, effectively task independent (in that it only requires unlabeled in-domain text), which is an important strength. 2.2 Cross-Lingual Sentiment Analysis Sentiment analysis is widely applied, and thus ideally we would have access to high quality supervised models in all human languages. Unfortunately, good quality labeled datasets are missing for many languages. Training models on resource rich languages and applying them to resource poor languages is therefore highly desirable. Crosslingual sentiment classification (CLSC) tackles this problem (Mihalcea et al., 2007; Banea et al., 2010; Wan, 2009; Lu et al., 2011; Balamurali and Joshi, 2012; Gui et al., 2013). Recent CLSC approaches use BWEs as features of deep learning architectures which allows us to use a model for target-language sentiment classification, even when the model was trained only using sourcelanguage supervised training data. Following this approach we perform CLSC on Spanish tweets using English training data. Even though Spanish is not resource-poor we simulate this by using only English annotated data. Xiao and Guo (2013) proposed a cross-lingual log-bilinear document model to learn distributed representations of words, which can capture both the semantic similarities of words across languages and the predictive information with respect to the classification task. Similarly, Tang and Wan (2014) jointly embedded texts in different languages into a joint semantic space representing sentiment. Zhou et al. (2014) employed aligned sentences in the BWE learning process, but in the sentiment classification process only representations in the source language are used for training, and representations in the target language are used for predicting labels. An important weakness of these three works was that aligned sentences were required. 
Some work has trained sentiment-specific BWEs using annotated sentiment information in both languages (Zhou et al., 2015, 2016), which is desirable, but this is not applicable to our scenario. Our goal is to adapt BWEs to a specific domain without requiring additional task-specific engineering or knowledge sources beyond having access to plentiful target-language in-domain unlabeled text. Both of the approaches we study in this work fit this criterion, the delightfully simple method for adapting BWEs can improve the performance of any off-the-shelf classifier that is based on BWEs, while the broadly applicable semi-supervised approach of H¨ausser et al. (2017) can improve the performance of any off-the-shelf classifier. 2.3 Bilingual Lexicon Induction (BLI) BLI is an important task that has been addressed by a large amount of previous work. The goal of BLI is to automatically extract word translation pairs using BWEs. While BLI is often used to provide an intrinsic evaluation of BWEs (Lazaridou et al., 2015; Vuli´c and Moens, 2015; Vuli´c and Korhonen, 2016) it is also useful for tasks such as machine translation (Madhyastha and Espa˜na Bohnet, 2017). Most work on BLI using BWEs focuses on frequent words in high-resource domains such as parliament proceedings or news texts. Recently Heyman et al. (2017) tackled BLI of words in the medical domain. This task is useful for many applications such as terminology extraction or OOV mining for machine translation of medical texts. Heyman et al. (2017) show that when only a small amount of medical data is available, 812 BLI using BWEs tends to perform poorly. Especially BWEs obtained using post-hoc mapping (Mikolov et al., 2013b; Lazaridou et al., 2015) fail on this task. Consequently, Heyman et al. (2017) build BWEs using aligned documents and then engineer a specialized classification-based approach to BLI. In contrast, our delightfully simple approach to create high-quality BWEs for the medical domain requires only monolingual data. We show that our adapted BWEs yield impressive improvements over non-adapted BWEs in this task with both cosine similarity and with the classifier of Heyman et al. (2017). In addition, we show that the broadly applicable method can push performance further using easily accessible unlabeled data. 3 Adaptation of BWEs BWEs trained on general domain texts usually result in lower performance when used in a system for a specific domain. There are two reasons for this. (i) Vocabularies of specific domains contain words that are not used in the general case, e.g., names of medicines or diseases. (ii) The meaning of a word varies across domains; e.g., “apple” mostly refers to a fruit in general domains, but is an electronic device in many product reviews. The delightfully simple method adapts general domain BWEs in a way that preserves the semantic knowledge from general domain data and leverages monolingual domain specific data to create domain-specific BWEs. Our domain-adaptation approach is applicable to any language-pair in which monolingual data is available. Unlike other methods, our approach is task independent: it only requires unlabeled in-domain target language text. 3.1 Approach To create domain adapted BWEs, we first train MWEs (monolingual word embeddings) in both languages and then map those into the same space using post-hoc mapping (Mikolov et al., 2013b). We train MWEs for both languages by concatenating monolingual out-of-domain and in-domain data. 
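For concreteness, here is a minimal sketch of this first step for one language. The authors use the word2vec skip-gram tool; gensim is assumed here purely as a stand-in, the corpus file names are hypothetical, and the hyperparameters are illustrative rather than the paper's (the paper reports using default parameters).

```python
# Sketch: train a domain-adapted MWE space for one language by simply
# concatenating out-of-domain and in-domain unlabeled text.
# Assumption: gensim >= 4.0 (older versions name vector_size "size").
from gensim.models import Word2Vec

def read_sentences(paths):
    """Yield whitespace-tokenized sentences from plain-text corpus files."""
    for path in paths:
        with open(path, encoding="utf-8") as f:
            for line in f:
                yield line.strip().split()

# Out-of-domain (e.g. subtitles) + in-domain (e.g. tweets); file names are hypothetical.
sentences = list(read_sentences(["subtitles.en.txt", "tweets.en.txt"]))
mwe = Word2Vec(sentences, sg=1, vector_size=300, min_count=5, workers=4)
mwe.wv.save("mwe.en.kv")
```

The same procedure is repeated for the second language; the two resulting MWE spaces are then aligned as described next.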
The out-of-domain data allows us to create accurate distributed representations of common vocabulary, while the in-domain data embeds domain-specific words. We then map the two MWEs using a small seed lexicon to create the adapted BWEs. Because post-hoc mapping only requires a seed lexicon as bilingual signal, it can easily be used with (cheap) monolingual data.

For post-hoc mapping, we use Mikolov et al. (2013b)'s approach. This model assumes a matrix W ∈ R^{d1×d2} which maps vectors from the source to the target MWEs, where d1 and d2 are the embedding space dimensions. A seed lexicon of pairs (x_i, y_i) ∈ L ⊆ R^{d1} × R^{d2} is needed, where x_i and y_i are source and target MWEs. W can be learned using ridge regression by minimizing the L2-regularized mapping error between the source x_i and the target y_i vectors:

min_W Σ_i ||W x_i − y_i||_2^2 + λ ||W||_2^2    (1)

where λ is the regularization weight. Based on the source embedding x, we then compute a target embedding as Wx. We create MWEs with word2vec skip-gram (Mikolov et al., 2013a; https://github.com/dav/word2vec) and estimate W with scikit-learn (Pedregosa et al., 2011). We use default parameters.

4 Cross-Lingual Sentiment Classification

In CLSC, an important application of BWEs, we train a supervised sentiment model on training data available in the source (a resource-rich language) and apply it to the target (a resource-poor language, for which there is typically no training data available). Because BWEs embed source and target words in the same space, annotations in the source (represented as BWEs) enable transfer learning. For CLSC of tweets, a drawback of BWEs trained on non-Twitter data is that they do not produce embeddings for Twitter-specific vocabulary, e.g., slang words like English coool and (Mexican) Spanish chido, resulting in lost information when a sentiment classifier uses them.

4.1 Training Data for Twitter Specific BWEs

As comparable non-Twitter data we use OpenSubtitles (Lison and Tiedemann, 2016), which contains 49.2M English and Spanish subtitle sentences respectively (Subtitle). The reason behind choosing Subtitles is that, although it is out-of-domain, it contains slang words similar to tweets, thus serving as a strong baseline in our setup. We experiment with two monolingual Twitter data sets: (i) 22M tweets: English (17.2M) and Spanish (4.8M) tweets downloaded over one month starting on 2016-10-15 using the public Twitter Streaming API3 with language filters en and es; (ii) a BACKGROUND corpus of 296K English and 150K Spanish (non-annotated) tweets released with the test data of the RepLab task (Amigó et al., 2013) described below. All Twitter data was tokenized using Bird et al. (2009) and lowercased. User names, URLs, numbers, emoticons and punctuation were removed. As the lexicon for the mapping, we use the BNC word frequency list (Kilgarriff, 1997), a list of 6,318 frequent English lemmas and their Spanish translations, obtained from Google Translate. Note that we do not need a domain-specific lexicon in order to get good-quality adapted BWEs.

4.2 Training Data for Sentiment Classifiers

For sentiment classification, we use data from the RepLab 2013 shared task (Amigó et al., 2013). The data is annotated with positive, neutral and negative labels and contains English and Spanish tweets. We used the official English training set (26.6K tweets) and the Spanish test set (14.9K) in the resource-poor setup. We only use the 7.2K Spanish labeled training data for comparison reasons in §6.2, which we will discuss later.
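To make the mapping step of Section 3.1 concrete, the following is a minimal sketch of Eq. (1) using scikit-learn's ridge regression, which the authors report using to estimate W; the function names, array shapes and regularization weight are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import Ridge

def learn_mapping(X_src, Y_tgt, reg_weight=1.0):
    """Learn W minimizing sum_i ||W x_i - y_i||^2 + lambda ||W||^2 (Eq. 1).

    X_src, Y_tgt: rows are the seed-lexicon embedding pairs (x_i, y_i),
    with shapes (n, d1) and (n, d2)."""
    ridge = Ridge(alpha=reg_weight, fit_intercept=False)
    ridge.fit(X_src, Y_tgt)          # scikit-learn fits Y ~ X W^T
    return ridge.coef_               # W, shape (d2, d1)

def map_to_target(W, x_src):
    """Project a source-language embedding into the target space as Wx."""
    return W @ x_src
```

Once W is estimated, every source-language embedding x is projected as Wx, so both languages effectively live in the same space when the downstream classifiers below are trained.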
The shared task was on target-level sentiment analysis, i.e., given a pair (document, target entity), the gold annotation is based on whether the sentiment expressed by the document is about the target. For example: I cried on the back seat of my BMW! where BMW is the target would be negative in the sentence-level scenario. However, it is neutral in the target-level case because the negative sentiment is not related to BMW. The reason for using this dataset is that it contains comparable English and Spanish tweets annotated for sentiment. There are other twitter datasets for English (Nakov et al., 2016) and Spanish (GarcıaCumbreras et al., 2016), but they were downloaded at different times and were annotated using different annotation methodologies, thus impeding a clean and consistent evaluation. 4.3 Sentiment Systems For evaluating our adapted BWEs on the RepLab dataset we used a target-aware sentiment classifier introduced by Zhang et al. (2016). The network first embeds input words using pre-trained 3dev.twitter.com/streaming/overview BWEs and feeds them to a bi-directional gated neural network. Pooling is applied on the hidden representations of the left and right context of the target mention respectively. Finally, gated neurons are used to model the interaction between the target mention and its surrounding context. During training we hold our pre-trained BWEs fixed and keep the default parameters of the model. We also implement Kim (2014)’s CNN-nonstatic system, which does not use the target information in a given document (target-ignorant). The network first embeds input words using pretrained BWEs and feeds them to a convolutional layer with multiple window sizes. Max pooling is applied on top of convolution followed by a fully connected network with one hidden layer. We used this system as well because it performed comparably to the target-aware system. The reason for this is that only 1% of the used data contains more than one target and out of these rare cases only 14% have differing sentiment labels in the same sentence, which are the difficult cases of target-level sentiment analysis. We used the default parameters as described in (Kim, 2014) with the exception of using 1000 feature maps and 30 epochs, based on our initial experiments. Word embeddings are fixed during the training just as for the target-aware classifier. 4.4 Results As we previously explained we evaluate our adaptation method on the task of target-level sentiment classification using both target-aware and target-ignorant classifiers. For all experiments, our two baselines are off-the-shelf classifiers using non-adapted BWEs, i.e., BWEs trained only using Subtitles. Our goal is to show that our BWE adaptation method can improve the performance of such classifiers. We train our adapted BWEs on the concatenation of Subtitle and 22M tweets or BACKGROUND respectively. In addition, we also report results with BWEs trained only on tweets. To train the sentiment classifiers we use the English Replab training set and we evaluate on the Spanish test set. To show the performance that can be reached in a monolingual setup, we report results obtained by using annotated Spanish sentiment data instead of English (oracle). We train two oracle sentiment classifiers using (i) MWEs trained on only the Spanish part of Subtitle and (ii) 814 targetaware ignorant oracle MWE Subtitle 62.17% 63.27% BWE Subtitle 62.46% 63.50% domain adaptation Baseline 55.14% 59.05% del. 
simple BACKGROUND 56.79% 58.50% 22M tweets 59.44% 61.14% Subtitle+BACKGROUND 58.64% 59.34% Subtitle+22M tweets 60.99% 61.06% Table 1: Accuracy of the BWE adaptation approach on the target-level sentiment classification task. The oracle systems used Spanish sentiment training data instead of English. BWEs trained on Subtitle using posthoc mapping. The difference between the two is that the embeddings of (ii) are enriched with English words which can be beneficial for the classification of Spanish tweets because they often contain a few English words. We do not compare with word embedding adaptation methods relying on specialized resources. The point of our work is to study task-independent methods and to the best of our knowledge ours is the first such attempt. Similarly, we do not compare against machine translation based sentiment classifiers (e.g., (Zhou et al., 2016)) because for their adaptation in-domain parallel data would be needed. Table 1 gives results for both classifiers. It shows that the adaptation of Subtitle based BWEs with data from Twitter (22M tweets and BACKGROUND) clearly outperforms the Baseline in all cases. The target-aware system performed poorly with the baseline BWEs and could benefit significantly from the adaptation approach. The target-ignorant performed better with the baseline BWEs but could also benefit from the adaptation. Comparing results with the Twitter-dataset-only based BWEs, the 22M tweets performed better even though the BACKGROUND dataset is from the same topic as the RepLab train and test sets. Our conjecture is that the latter is too small to create good BWEs. In combination with Subtitles, 22M tweets also yields better results than when combined with BACKGROUND. Although the best accuracy was reached using the 22M tweetsonly based BWEs, it is only slightly better then the adapted Subtitles+22M tweets based BWEs. In §6 we show that both the semantic knowledge from Subtitles and the domain-specific information from tweets are needed to further improve results. Comparing the two classifiers we can say that they performed similarly in terms of their best results. On the other hand, the target-ignorant system had better results on average. This might seem surprising at first because the system does not use the target as information. But considering the characteristics of RepLab, i.e., that the number of tweets that contains multiple targets is negligible, using the target offers no real advantage. Although we did not focus on the impact of the seed lexicon size, we ran post-hoc mapping with different sizes during our preliminary experiments. With 1,000 and 100 word pairs in the lexicon the target-ignorant system suffered 0.5% and 4.0% drop in average of our setups respectively. To summarize the result: using adapted BWEs for the Twitter CLSC task improves the performance of off-the-shelf classifiers. 5 Medical Bilingual Lexicon Induction Another interesting downstream task for BWEs is bilingual lexicon induction. Given a list of words in a source language, the goal of BLI is to mine translations for each word in a chosen target language. The medical bilingual lexicon induction task proposed in (Heyman et al., 2017) aims to mine medical words using BWEs trained on a very small amount of English and Dutch monolingual medical data. Due to the lack of resources in this domain, good quality BWEs are hard to build using in-domain data only. 
We show that by enriching BWEs with general domain knowledge (in the form of general domain monolingual corpora) better results can be achieved on this medical domain task. 5.1 Experimental Setup We evaluate our improved BWEs on the dataset provided by Heyman et al. (2017). The monolingual medical data consists of English and Dutch medical articles from Wikipedia. The English (resp. Dutch) articles contain 52,336 (resp. 21,374) sentences. A total of 7,368 manually annotated word translation pairs occurring in the English (source) and Dutch (target) monolingual corpora are provided as gold data. This set is split 64%/16%/20% into trn/dev/test. 20% of the English words have multiple translations. Given an English word, the task is to find the correct Dutch translation. As monolingual general-domain data we use 815 cosine similarity classifier F1 (top) F1 (all) F1 (top) F1 (all) Baseline 13.43 9.84 37.73 36.61 Baseline BNC lexicon 20.73 21.78 Adapted medical lexicon 14.18 14.15 40.71 38.09 Adapted BNC lexicon 16.29 16.71 22.10 21.50 Table 2: We report F1 results for medical BLI with the cosine similarity and the classifier based systems. We present baseline and our proposed domain adaptation method using both general and medical lexicons. the English and Dutch data from Europarl (v7) (Koehn, 2005), a corpus of 2 million sentence pairs. Although Europarl is a parallel corpus, we use it in a monolingual way and shuffle each side of the corpus before training. By using massive cheap data we create high-quality MWEs in each language which are still domain-specific (due to inclusion of medical data). To obtain an out-ofdomain seed lexicon, we translated the English words in BNC to Dutch using Google Translate (just as we did before for the Twitter CLSC task). We then use the out-of-domain BNC and the indomain medical seed lexicons in separate experiments to create BWEs with post-hoc mapping. Note, we did not concatenate the two lexicons because (i) they have a small common subset of source words which have different target words, thus having a negative effect on the mapping and (ii) we did not want to modify the medical seed lexicon because it was taken from previous work. 5.2 BLI Systems To perform BLI we use two methods. Because BWEs represent words from different languages in a shared space, BLI can be performed via cosine similarity in this space. In other words, given a BWE representing two languages Vs and Vt, the translation of each word s ∈Vs can be induced by taking the word t ∈Vt whose representation ⃗xt in the BWE is closest to the representation ⃗xs. As the second approach we use a classifier based system proposed by Heyman et al. (2017). This neural network based system is comprised of two main modules. The first is a character-level LSTM which aims to learn orthographic similarity of word pairs. The other is the concatenation of the embeddings of the two words using embedding layers with the aim of learning the similarity among semantic representations of the words. Dense layers are applied on top of the two modules before the output soft-max layer. The classifier is trained using positive and negative word pair examples and a pre-trained word embedding model. Negative examples are randomly generated for each positive one in the training lexicon. We used default parameters as reported by Heyman et al. (2017) except for the t classification thresholds (used at prediction time). We finetuned these on dev. 
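As an illustration of the first BLI method (nearest-neighbour search by cosine similarity in the shared BWE space), here is a minimal sketch; the matrix and variable names are assumptions, and the classifier-based system of Heyman et al. (2017) is not reproduced here.

```python
import numpy as np

def bli_by_cosine(src_matrix, tgt_matrix, tgt_words, k=1):
    """For every source word, return the k target words whose embeddings
    are closest by cosine similarity in the shared bilingual space."""
    src = src_matrix / np.linalg.norm(src_matrix, axis=1, keepdims=True)
    tgt = tgt_matrix / np.linalg.norm(tgt_matrix, axis=1, keepdims=True)
    sims = src @ tgt.T                         # (n_src, n_tgt) cosine scores
    top = np.argsort(-sims, axis=1)[:, :k]     # indices of the best candidates
    return [[tgt_words[j] for j in row] for row in top]
```

With k > 1 this returns a ranked candidate list, which is the natural analogue of the "all" evaluation setting where multiple translations per source word are allowed.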
We note that the system works with pre-trained MWEs as well (and report these as official baseline results) but it requires BWEs for candidate generation at prediction time, thus we use BWEs for the system’s input for all experiments. In preliminary work, we had found that MWE and BWE results are similar. 5.3 Results Heyman et al. (2017)’s results are our baseline. Table 2 compares its performance with our adapted BWEs, with both cosine similarity and classification based systems. “top” F1 scores are based on the most probable word as prediction only; “all” F1 scores use all words as prediction whose probability is above the threshold. It can be seen that the cosine similarity based system using adapted BWEs clearly outperforms the nonadapted BWEs which were trained in a resource poor setup.4 Moreover, the best performance was reached using the general seed lexicon for the mapping which is due to the fact that general domain words have better quality embeddings in the MWE models, which in turn gives a better quality mapping. The classification based system performs significantly better comparing to cosine similarity by exploiting the seed lexicon better. Using adapted BWEs as input word embeddings for the system further improvements were achieved which shows the better quality of our BWEs. Simulating an even poorer setup by using a general lexicon, the 4The results for cosine similarity in (Heyman et al., 2017) are based on BWESG-based BWEs (Vuli´c and Moens, 2016) trained on a small document aligned parallel corpus without using a seed lexicon. 816 performance gain of the classifier is lower. This shows the significance of the medical seed lexicon for this system. On the other hand, adapted BWEs have better performance compared to non-adapted ones using the best translation while they have just slightly lower F1 using multiple translations. This result shows that while with adapted BWEs the system predicts better “top” translations, it has a harder time when predicting “all” due to the increased vocabulary size. To summarize: we have shown that adapted BWEs increase performance for this task and domain; and they do so independently of the taskspecific system that is used. 6 Semi-Supervised Learning In addition to the experiments that show our BWEadaptation method’s task and language independence, we investigate ways to further incorporate unlabeled data to overcome data sparsity. H¨ausser et al. (2017) introduce a semisupervised method for neural networks that makes associations from the vector representation of labeled samples to those of unlabeled ones and back. This lets the learning exploit unlabeled samples as well. While H¨ausser et al. (2017) use their model for image classification, we adapt it to CLSC of tweets and medical BLI. We show that our semisupervised model requires adapted BWEs to be effective and yields significant improvements. This innovative method is general and can be applied to any classification when unlabeled text is available. 6.1 Model H¨ausser et al. (2017)’s basic assumption is that the embeddings of labeled and unlabeled samples – i.e., the representations in the neural network on which the classification layer is applied – are similar within the same class. To achieve this, walking cycles are introduced: a cycle starts from a labeled sample, goes to an unlabeled one and ends at a labeled one. A cycle is correct if the start and end samples are in the same class. 
The probability of going from sample A to B is proportional to the cosine similarity of their embeddings. To maximize the number of correct cycles, two loss functions are employed: the walker loss and the visit loss. The walker loss penalizes incorrect walks and encourages a uniform probability distribution of walks to the correct class. It is defined as:

L_walker := H(T, P^{aba})    (2)

where H is the cross-entropy function, P^{aba}_{ij} is the probability that a cycle starts from sample i and ends at j, and T is the uniform target distribution:

T_ij := 1/#c(i) if c(i) = c(j), and 0 otherwise    (3)

where c(i) is the class of sample i and #c(i) is the number of occurrences of c(i) in the labeled set. The visit loss encourages cycles to visit all unlabeled samples, rather than just those which are the most similar to labeled samples. It is defined as:

L_visit := H(V, P^{visit}),  with  P^{visit}_j := ⟨P^{ab}_{ij}⟩_i  and  V_j := 1/U    (4)

where H is cross-entropy, P^{ab}_{ij} is the probability that a cycle starts from sample i and goes to j, and U is the number of unlabeled samples. The total loss during training is the sum of the walker, visit and classification (cross-entropy between predicted and gold labels) losses, which is minimized using Adam (Kingma and Ba, 2015).

We adapt this model (including the two losses) to sentiment classification, focusing on the target-ignorant classifier, and to the classifier-based approach for BLI. We will call these systems semisup (we publicly release our implementation: https://github.com/hangyav/biadapt). Because we initialize the embedding layers of both classifiers with BWEs, the models are able to make some correct cycles at the beginning of training and to improve them later on. We describe the labeled and unlabeled datasets used in the subsequent sections below. We use Häusser et al. (2017)'s implementation of the losses, with 1.0, 0.5 and 1.0 weights for the walker, visit and classification losses, respectively, for CLSC, based on preliminary experiments. We fine-tuned the weights for BLI on dev for each experiment.

                      semisup
domain adaptation
Baseline              58.67% (-0.38%)
BACKGROUND            57.41% (-1.09%)
22M tweets            60.19% (-0.95%)
Subtitle+BACKGROUND   60.31% (0.97%)
Subtitle+22M tweets   63.23% (2.17%)
Table 3: Accuracy on CLSC of the adapted BWE approach with the semisup (target-ignorant with additional loss functions) system, compared to the target-ignorant system (differences in brackets).

6.2 Semi-Supervised CLSC

As in §4.4, we use pre-trained BWEs to initialize the classifier and use the English sentiment training data as the labeled set. Furthermore, we use the Spanish sentiment training data as the unlabeled set, ignoring its annotation. This setup is very similar to real-world low-resource scenarios: unlabeled target-language tweets are easy to download, while labeled English tweets are available.
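Returning to the losses of Section 6.1 (Eqs. 2-4), the following is a minimal sketch of how the walker and visit losses can be computed, assuming PyTorch; the authors use Häusser et al. (2017)'s original implementation, and details such as the exact similarity function inside the softmax and numerical stabilization are simplified here.

```python
import torch
import torch.nn.functional as F

def association_losses(emb_labeled, emb_unlabeled, labels):
    """Walker and visit losses (Eqs. 2-4), sketched.
    emb_labeled:   (L, d) embeddings of labeled samples
    emb_unlabeled: (U, d) embeddings of unlabeled samples
    labels:        (L,)   class ids of the labeled samples
    """
    sim = emb_labeled @ emb_unlabeled.t()            # similarities, (L, U)
    p_ab = F.softmax(sim, dim=1)                     # labeled -> unlabeled
    p_ba = F.softmax(sim.t(), dim=1)                 # unlabeled -> labeled
    p_aba = p_ab @ p_ba                              # round-trip probabilities, (L, L)

    # Target T: uniform over labeled samples of the same class (Eq. 3).
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    target = same / same.sum(dim=1, keepdim=True)
    walker = -(target * (p_aba + 1e-8).log()).sum(dim=1).mean()   # H(T, P^aba)

    # Visit loss: cycles should visit all unlabeled samples uniformly (Eq. 4).
    p_visit = p_ab.mean(dim=0)                       # (U,)
    visit = -((1.0 / p_visit.numel()) * (p_visit + 1e-8).log()).sum()

    return walker, visit
```

The total training loss adds the usual classification cross-entropy on the labeled batch, weighted with the 1.0 / 0.5 / 1.0 scheme mentioned above for CLSC.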
For Subtitle+22M tweets, we even get very close to the best oracle (BWE Subtitle) in Table 1 getting only 0.27% less accuracy – an impressive result keeping in mind that we did not use labeled Spanish data. The RepLab dataset contains tweets from 4 topics: automotive, banking, university, music. We manually analyzed similar tweets from the labeled and unlabeled sets. We found that when using semisup, English and Spanish tweets from the same topics are more similar in the embedding space than occurs without the additional losses. Topics differ in how they express sentiment – this may explain why semisup increases performance for RepLab. Adding supervision. To show how well semisup can exploit the unlabeled data we used both English and Spanish sentiment training data together to train the sentiment classifiers. Table 4 shows that by using annotated data in both languages we get clearly better results than when using only one language. Tables 3 and 4 show that for Subtitle+22M tweets based BWEs, the semisup approach achieved high improvement (2.17%) comparing to targetignorant with English training data only, while it achieved lower improvement (0.97%) with the Subtitle+BACKGROUND based BWEs. On the other hand, adding labeled Spanish data caused just a slight increase comparing to semisup with Subtitle+22M tweets based BWEs (0.59%), while in case of Subtitle+BACKGROUND we got significant additional improvement (2.61%). This means that with higher quality BWEs, unlabeled target-language data can be exploited better. It can also be seen that the target-aware system outperformed the target-ignorant system using additional labeled target-language data. The reason could be that it is a more complex network and therefore needs more data to reach high performance. The results in table 4 are impressive: our targetlevel system is strongly competitive with the official shared task results. We achieved high accuracy on the Spanish test set by using only English training data. Comparing our best system which used all training data to the official results (Amig´o et al., 2013) we would rank 2nd even though our system is not fine-tuned for the RepLab dataset. Furthermore, we also outperformed the oracles when using annotated data from both languages which shows the additional advantage of using BWEs. 6.3 Semi-Supervised BLI For BLI experiments with semisup we used word pairs from the medical seed lexicon as the labeled set (with negative word pairs generated as described in §5.2). As opposed to CLSC and the work of (H¨ausser et al., 2017), for this task we do not have an unlabeled set, and therefore we need to generate it. We developed two scenarios. For the first, BNC, we generate a general unlabeled set using English words from the BNC lexicon and generate 10 pairs out of each word by using the 5 most similar Dutch words based on the corresponding BWEs and 5 random Dutch words. For the second scenario, medical, we generate an in-domain unlabeled set by generating for each English word in the medical lexicon the 3 most similar Dutch 818 lang target-aware target-ignorant oracle MWE Subtitle Es 62.17% 63.27% BWE Subtitle Es 62.46% 63.50% domain adaptation Subtitle+BACKGROUND En 58.64% 59.34% Subtitle+BACKGROUND En+Es 64.01% 62.92% (2.61%) Subtitle+22M tweets En 60.99% 61.06% Subtitle+22M tweets En+Es 64.24% 63.82% (0.59%) Table 4: Accuracy on CLSC of both target-aware and target-ignorant systems using English or/and Spanish sentiment training data. Column lang shows the language of the used training data. 
Differences comparing to semisup are indicated in brackets. F1 (top) F1 (all) Baseline+BNC 35.04 (-0.66) 34.98 (-1.40) Baseline+medical 36.20 (0.50) 36.55 (0.16) Adapted+BNC 41.01 (0.30) 39.04 (0.95) Adapted+medical 41.44 (0.73) 37.51 (-0.57) Table 5: Results with the semi-supervised system for BLI. Differences comparing to previous results are indicated in brackets. Baseline results are compared to rerun experiments of Heyman et al. (2017) using BWEs instead of MWEs. words based on BWEs and for each of these we use the 5 most similar English words (ignoring the words which are in the original medical lexicon) and 5 negative words. The idea behind these methods is to automatically generate an unlabeled set that hopefully has a similar positive and negative word pair distribution to the distribution in the labeled set. Results in Table 5 show that adding semisup to the classifier further increases performance for BLI as well. For the baseline system, when using only in-domain text for creating BWEs, only the medical unlabeled set was effective, general domain word pairs could not be exploited due to the lack of general semantic knowledge in the BWE model. On the other hand, by using our domain adapted BWEs, which contain both general domain and in-domain semantical knowledge, we can exploit word pairs from both domains. Results for adapted BWEs increased in 3 out of 4 cases, where the only exception is when using multiple translations for a given source word (which may have been caused by the bigger vocabulary size). These results show that adapted BWEs are needed to exploit unlabeled data well which leads to an impressive overall 3.71 increase compared with the best result in previous work (Heyman et al., 2017), by using only unlabeled data. 7 Conclusion Bilingual word embeddings trained on general domain data yield poor results in out-of-domain tasks. We presented experiments on two different low-resource task/domain combinations. Our delightfully simple task independent method to adapt BWEs to a specific domain uses unlabeled monolingual data only. We showed that with the support of adapted BWEs the performance of offthe-shelf methods can be increased for both crosslingual Twitter sentiment classification and medical bilingual lexicon induction. Furthermore, by adapting the broadly applicable semi-supervised approach of H¨ausser et al. (2017) (which until now has only been applied in computer vision) we were able to effectively exploit unlabeled data to further improve performance. We showed that, when also using high-quality adapted BWEs, the performance of the semi-supervised systems can be significantly increased by using unlabeled data at classifier training time. In addition, CLSC results are competitive with a system that uses targetlanguage labeled data, even when we use no such target-language labeled data. Acknowledgments We would like to thank the anonymous reviewers for their valuable input. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement №640550). References Enrique Amig´o, Jorge Carrillo de Albornoz, Irina Chugur, Adolfo Corujo, Julio Gonzalo, Tamara Mart´ın, Edgar Meij, Maarten de Rijke, Damiano Spina, Enrique Amigo, Jorge Carrillo de Albornoz, Tamara Martin, and Maarten de Rijke. 2013. Overview of replab 2013: Evaluating online reputation monitoring systems. In Proc. CLEF. 819 A.R. Balamurali and Adity Joshi. 2012. 
Cross-lingual sentiment analysis for indian languages using linked wordnets. In Proc. COLING. Carmen Banea, Rada Mihalcea, and Janyce Wiebe. 2010. Multilingual subjectivity: Are more languages better? In Proc. COLING. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python. O’Reilly Media, Inc. Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2016. Learning crosslingual word embeddings without bilingual corpora. In Proc. EMNLP. Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proc. NAACL-HLT. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proc. EACL. Miguel ´Angel Garcıa-Cumbreras, Julio VillenaRom´an, Eugenio Martınez-C´amara, Manuel Carlos D´ıaz-Galiano, Mar´ıa-Teresa Mart´ın-Valdivia, and L. Alfonso Ure˜na-L´opez. 2016. Overview of tass 2016. In Proc. TASS. Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed representations without word alignments. In Proc. ICML. Stephan Gouws and Anders Søgaard. 2015. Simple task-specific bilingual word embeddings. In Proc. NAACL-HLT. Lin Gui, Ruifeng Xu, Qin Lu, Jun Xu, Jian Xu, Bin Liu, and Wang Xiaolong. 2013. A mixed model for cross lingual opinion analysis. In Proc. NLPCC. Philip H¨ausser, Alexander Mordvintsev, and Daniel Cremers. 2017. Learning by Association - A versatile semi-supervised training method for neural networks. In Proc. CVPR. Karl Moritz Hermann and Phil Blunsom. 2014. Multilingual models for compositional distributed semantics. In Proc. ACL. Geert Heyman, Ivan Vuli´c, and Marie-Francine Moens. 2017. Bilingual lexicon induction by learning to combine word-level and character-level representations. In Proc. EACL. Adam Kilgarriff. 1997. Putting frequencies in the dictionary. International Journal of Lexicography. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proc. EMNLP. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. ICLR. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proc. MT Summit. Angeliki Lazaridou, Georgiana Dinu, and Marco Baroni. 2015. Hubness and pollution: Delving into cross-space mapping for zero-shot learning. In Proc. ACL. Pierre Lison and J¨org Tiedemann. 2016. Opensubtitles2016: Extracting large parallel corpora from movie and tv subtitles. In Proc. LREC. Bin Lu, Chenhao Tan, Claire Cardie, and Benjamin K. Tsou. 2011. Joint bilingual sentiment classification with unlabeled parallel corpora. In Proc. ACL. Pranava Swaroop Madhyastha and Cristina Espa˜na Bohnet. 2017. Learning bilingual projections of embeddings for vocabulary expansion in machine translation. In Proc. RepL4NLP. Rada Mihalcea, Carmen Banea, and Janyce Wiebe. 2007. Learning multilingual subjective language via cross-lingual projections. In Proc. ACL. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proc. ICLR. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168. Preslav Nakov, Alan Ritter, Sara Rosenthal, Veselin Stoyanov, and Fabrizio Sebastiani. 2016. SemEval2016 task 4: Sentiment analysis in Twitter. In Proc. SemEval. 
Fabian Pedregosa, Ga¨el Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and ´Edouard Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research. Sascha Rothe, Sebastian Ebert, and Hinrich Sch¨utze. 2016. Ultradense Word Embeddings by Orthogonal Transformation. In Proc. NAACL-HLT. Xuewei Tang and Xiaojun Wan. 2014. Learning bilingual embedding model for cross-language sentiment classification. In Proc. WI-IAT. Ivan Vuli´c and Anna Korhonen. 2016. On the Role of Seed Lexicons in Learning Bilingual Word Embeddings. In Proc. ACL. Ivan Vuli´c and Marie-Francine Moens. 2015. Bilingual word embeddings from non-parallel documentaligned data applied to bilingual lexicon induction. In Proc. ACL. Ivan Vuli´c and Marie-Francine Moens. 2016. Bilingual distributed word representations from documentaligned comparable data. Journal of Artificial Intelligence Research. 820 Xiaojun Wan. 2009. Co-training for cross-lingual sentiment classification. In Proc. ACL. Min Xiao and Yuhong Guo. 2013. Semi-supervised representation learning for cross-lingual text classification. In Proc. EMNLP. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proc. NAACL-HLT. Meishan Zhang, Yue Zhang, and Duy-Tin Vo. 2016. Gated Neural Networks for Targeted Sentiment Analysis. In Proc. AAAI 2016. Guangyou Zhou, Tingting He, and Jun Zhao. 2014. Bridging the Language Gap: Learning Distributed Semantics for Cross-Lingual Sentiment Classification. In Proc. NLPCC. Huiwei Zhou, Long Chen, Fulin Shi, and Degen Huang. 2015. Learning bilingual sentiment word embeddings for cross-language sentiment classification. In Proc. ACL. Xinjie Zhou, Xianjun Wan, and Jianguo Xiao. 2016. Cross-lingual sentiment classification with bilingual document representation learning. In Proc. ACL.
2018
75
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 821–832 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 821 Knowledgeable Reader: Enhancing Cloze-Style Reading Comprehension with External Commonsense Knowledge Todor Mihaylov and Anette Frank Research Training Group AIPHES Department of Computational Linguistics, Heidelberg University Heidelberg, Germany {mihaylov,frank}@cl.uni-heidelberg.de Abstract We introduce a neural reading comprehension model that integrates external commonsense knowledge, encoded as a keyvalue memory, in a cloze-style setting. Instead of relying only on document-toquestion interaction or discrete features as in prior work, our model attends to relevant external knowledge and combines this knowledge with the context representation before inferring the answer. This allows the model to attract and imply knowledge from an external knowledge source that is not explicitly stated in the text, but that is relevant for inferring the answer. Our model improves results over a very strong baseline on a hard Common Nouns dataset, making it a strong competitor of much more complex models. By including knowledge explicitly, our model can also provide evidence about the background knowledge used in the RC process. 1 Introduction Reading comprehension (RC) is a language understanding task similar to question answering, where a system is expected to read a given passage of text and answer questions about it. Cloze-style reading comprehension is a task setting where the question is formed by replacing a token in a sentence of the story with a placeholder (left part of Figure 1). In contrast to many previous complex models (Weston et al., 2015; Dhingra et al., 2017; Cui et al., 2017; Munkhdalai and Yu, 2016; Sordoni et al., 2016) that perform multi-turn reading of a story and a question before inferring the correct answer, we aim to tackle the cloze-style RC task in a way that resembles how humans solve it: using, in addition, background knowledge. We develop The  prince  put  his                              away   and  prepared  for  his  long  trip.. He  mounted  his  XXXX  and  rode  away. Story Question Commonsense  knowledge Candidates horse sword hand … [IsUsedfor] horse   riding   [Causes] sword death [HasA] human hand … … [RelatedTo] mount animal Task  setup The  prince  was  on  his  white                          ,   with  a                            in  his                              . horse sword hand sword … … … Figure 1: Cloze-style reading comprehension with external commonsense knowledge. a neural model for RC that can successfully deal with tasks where most of the information to infer answers from is given in the document (story), but where additional information is needed to predict the answer, which can be retrieved from a knowledge base and added to the context representations explicitly.1 An illustration is given in Figure 1. Such knowledge may be commonsense knowledge or factual background knowledge about entities and events that is not explicitly expressed but can be found in a knowledge base such as ConceptNet (Speer et al., 2017), BabelNet (Navigli and Ponzetto, 2012), Freebase (Tanon et al., 2016) or domain-specific KBs collected with Information Extraction approaches (Fader et al., 2011; Mausam et al., 2012; Bhutani et al., 2016). 
Thus, we aim to define a neural model that encodes preselected knowledge in a memory, and that learns to include the available knowledge as an enrichment to the context representation. The main difference of our model to prior state-of-the-art is that instead of relying only on document-to-question interaction or discrete features while performing multiple hops over the document, our model (i) attends to relevant selected 1‘Context representation’ refers to a vector representation computed from textual information only (i.e., document (story) or question). 822 external knowledge and (ii) combines this knowledge with the context representation before inferring the answer, in a single hop. This allows the model to explicitly imply knowledge that is not stated in the text, but is relevant for inferring the answer, and that can be found in an external knowledge source. Moreover, by including knowledge explicitly, our model provides evidence and insight about the used knowledge in the RC. Our main contributions are: (i) We develop a method for integrating knowledge in a simple but effective reading comprehension model (AS Reader, Kadlec et al. (2016)) and improve its results significantly whereas other models employ features or multiple hops. (ii) We examine two sources of common knowledge: WordNet (Miller et al., 1990) and ConceptNet (Speer et al., 2017) and show that this type of knowledge is important for answering common nouns questions and also improves slightly the performance for named entities. (iii) We show that knowledge facts can be added directly to the text-only representation, enriching the neural context encoding. (iv) We demonstrate the effectiveness of the injected knowledge by case studies and data statistics in a qualitative evaluation study. 2 Reading Comprehension with Background Knowledge Sources In this work, we examine the impact of using external knowledge as supporting information for the task of cloze style reading comprehension. We build a system with two modules. The first, Knowledge Retrieval, performs fact retrieval and selects a number of facts f1, ..., fp that might be relevant for connecting story, question and candidate answers. The second, main module, the Knowledgeable Reader, is a knowledge-enhanced neural module. It uses the input of the story context tokens d1..m, the question tokens q1..n, the set of answer candidates a1..k and a set of ‘relevant’ background knowledge facts f1..p in order to select the right answer. To include external knowledge for the RC task, we encode each fact f1..p and use attention to select the most relevant among them for each token in the story and question. We expect that enriching the text with additional knowledge about the mentioned concepts will improve the prediction of correct answers in a strong single-pass system. See Figure 1 for illustration. 2.1 Knowledge Retrieval In our experiments we use knowledge from the Open Mind Common Sense (OMCS, Singh et al. (2002)) part of ConceptNet, a crowd-sourced resource of commonsense knowledge with a total of ∼630k facts. Each fact fi is represented as a triple fi=(subject, relation, object), where subject and object can be multi-word expressions and relation is a relation type. 
An example is: ([bow]subj, [IsUsedFor]rel, [hunt, animals]obj) We experiment with three set-ups: using (i) all facts from OMCS that pertain to ConceptNet, referred to as CN5All, (ii) using all facts from CN5All excluding some WordNet relations referred to as CN5Sel(ected) (see Section 3), and using (iii) facts from OMCS that have source set to WordNet (CN5WN3). Retrieving relevant knowledge. For each instance (D, Q, A1..10) we retrieve relevant commonsense background facts. We first retrieve facts that contain lemmas that can be looked up via tokens contained in any D(ocument), Q(uestion) or A(nswer candidates). We add a weight value for each node: 4, if it contains a lemma of a candidate token from A; 3, if it contains a lemma from the tokens of Q; and 2 if it contains a lemma from the tokens of D. The selected weights are chosen heuristically such that they model relative fact importance in different interactions as A+A > A+Q > A+D > D+Q > D+D. We weight the fact triples that contain these lemmas as nodes, by summing the weights of the subject and object arguments. Next, we sort the knowledge triples by this overall weight value. To limit the memory of our model, we run experiments with different sizes of the top number of facts (P) selected from all instance fact candidates, P ∈{50, 100, 200}. As additional retrieval limitation, we force the number of facts per answer candidate to be the same, in order to avoid a frequency bias for an answer candidate that appears more often in the knowledge source. Thus, if we select the maximum 100 facts for each task instance and we have 10 answer candidates ai=1..10, we retrieve the top 10 facts for each candidate ai that has either a subject or an object lemma for a token in ai. If the same fact contains lemmas of two candidates ai and aj (j > i), we add the fact once for ai and do not add the same fact again for aj. If several facts have the same weight, we take 823 𝜶"#$"%&'" = 𝑊*𝜶𝒄𝒕𝒙 𝒄𝒕𝒙+  𝑊.𝜶𝒄𝒕𝒙/𝒌𝒏 𝒄𝒕𝒙 +  𝑊2𝜶𝒄𝒕𝒙 𝒄𝒕𝒙/𝒌𝒏+  𝑊3𝜶𝒄𝒕𝒙/𝒌𝒏 𝒄𝒕𝒙/𝒌𝒏 sword UsedFor kill Weighted  fact   representations Weighted  facts sum horse UsedFor ride IsA animal horse horse ran fast rode his XXXXX sword and horse … … Document Question … … Context Representation Knowledge  facts  memory Context  +   Knowledge Representation 𝜶𝒄𝒕𝒙 𝒄𝒕𝒙 𝑟5678 t 𝜶"#$"%9'" … 𝑃ℎ𝑜𝑟𝑠𝑒|𝑞, 𝑑= C 𝑎E = 𝑎F + 𝑎F/H I EJK(MNO$",P) g Token-­‐wise  connection Answer  placeholder 𝑐P 678/S# 𝑐P 678 𝑟5678/S# 𝑟5678 𝑟5678/S# 𝜶𝒄𝒕𝒙/𝒌𝒏 𝒄𝒕𝒙 𝜶𝒄𝒕𝒙 𝒄𝒕𝒙/𝒌𝒏 𝜶𝒄𝒕𝒙/𝒌𝒏 𝒄𝒕𝒙/𝒌𝒏 Token-­‐wise  connection 𝑐S# Att  +  Softmax multiply 𝑓$U&F 𝑓O"' 𝑓N&F Figure 2: The Knowledgeable Reader combines plain context & enhanced (context + knowledge) repres. of D and Q and retrieved knowledge from the explicit memory with the Key-Value approach. the first in the order of the list2, i.e., the order of retrieval from the database. If one candidate has less than 10 facts, the overall fact candidates for the sample will be less than the maximum (100). 2.2 Neural Model: Extending the Attention Sum Reader with a Knowledge Memory We implement our Knowledgeable Reader (KnReader) using as a basis the Attention Sum Reader as one of the strongest core models for single-hop RC. We extend it with a knowledge fact memory that is filled with pre-selected facts. Our aim is to examine how adding commonsense knowledge to a simple yet effective model can improve the RC process and to show some evidence of that by attending on the incorporated knowledge facts. The model architecture is shown in Figure 2. Base Attention Model. 
2.2 Neural Model: Extending the Attention Sum Reader with a Knowledge Memory

We implement our Knowledgeable Reader (KnReader) using the Attention Sum Reader, one of the strongest core models for single-hop RC, as a basis. We extend it with a knowledge fact memory that is filled with pre-selected facts. Our aim is to examine how adding commonsense knowledge to a simple yet effective model can improve the RC process, and to provide some evidence for this by attending over the incorporated knowledge facts. The model architecture is shown in Figure 2.

Base Attention Model. The Attention-Sum Reader (Kadlec et al., 2016), our base model for RC, reads the input of story tokens $d_{1..n}$, the question tokens $q_{1..m}$, and the set of candidates $a_{1..10}$ that occur in the story text. The model calculates the attention between the question representation $r_q$ and the story token context encodings of the candidate tokens $a_{1..10}$, and sums the attention scores for candidates that appear multiple times in the story. The model selects as the answer the candidate with the highest attention score.

Word Embeddings Layer. We represent input document and question tokens $w$ by looking up their embedding representations $e_i = \mathrm{Emb}(w_i)$, where Emb is an embedding lookup function. We apply dropout (Srivastava et al., 2014) with keep probability $p = 0.8$ to the output of the embedding lookup layer.

Context Representations. To represent the document and question contexts, we first encode the tokens with a bi-directional GRU (Gated Recurrent Unit; Chung et al., 2014) to obtain context-encoded representations for the document ($c^{ctx}_{d_{1..n}}$) and the question ($c^{ctx}_{q_{1..m}}$):

    c^{ctx}_{d_{1..n}} = \mathrm{BiGRU}_{ctx}(e_{d_{1..n}}) \in \mathbb{R}^{n \times 2h}    (1)
    c^{ctx}_{q_{1..m}} = \mathrm{BiGRU}_{ctx}(e_{q_{1..m}}) \in \mathbb{R}^{m \times 2h}    (2)

where $d_i$ and $q_i$ denote the $i$-th token of the document $d$ and the question $q$, respectively, $n$ and $m$ are the lengths of $d$ and $q$, and $h$ is the output hidden size (256) of a single GRU unit. BiGRU is defined in (3), with $e_i$ a word embedding vector:

    \mathrm{BiGRU}_{ctx}(e_i, h_{prev}) = [\overrightarrow{\mathrm{GRU}}(e_i, \overrightarrow{h}_{prev}), \overleftarrow{\mathrm{GRU}}(e_i, \overleftarrow{h}_{prev})]    (3)

where $h_{prev} = [\overrightarrow{h}_{prev}, \overleftarrow{h}_{prev}]$, and $\overrightarrow{h}_{prev}$ and $\overleftarrow{h}_{prev}$ are the previous hidden states of the forward and backward layers. Below we write $\mathrm{BiGRU}_{ctx}(e_i)$, omitting the hidden state, for short.

Question Query Representation. For the question we construct a single vector representation $r^{ctx}_q$ by retrieving the token representation at the placeholder (XXXX) index $pl$ (cf. Figure 2):

    r^{ctx}_q = c^{ctx}_{q_{1..m}}[pl] \in \mathbb{R}^{2h}    (4)

where $[pl]$ is an element pickup operation. Our question vector representation differs from the original AS Reader, which builds the question representation by concatenating the last states of a forward and a backward layer, $[\overrightarrow{\mathrm{GRU}}(e_m), \overleftarrow{\mathrm{GRU}}(e_1)]$. We changed the original representation because we observed some very long questions, and in this way we aim to prevent the context encoder from 'forgetting' where the placeholder is.

Answer Prediction: Qctx to Dctx Attention. In order to predict the correct answer to the given question, we rank the given answer candidates $a_1..a_L$ according to the normalized attention-sum score between the context (ctx) representation of the question placeholder $r^{ctx}_q$ and the representations of the candidate tokens in the document:

    P(a_i \mid q, d) = \mathrm{softmax}\Big(\sum_j \alpha_{ij}\Big)    (5)
    \alpha_{ij} = \mathrm{Att}(r^{ctx}_q, c^{ctx}_{d_j}), \quad i \in [1..L]    (6)

where $j$ ranges over the list of indices that point to the occurrences of candidate $a_i$ in the document context representation $c^{ctx}_d$. Att is a dot product.
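To make the base prediction step concrete, here is a compact PyTorch sketch of Eqs. 1-6 under stated assumptions: tensor names, sizes and the toy candidate positions are ours, and this is an illustration rather than the authors' implementation.

```python
# Minimal PyTorch sketch of the base attention-sum step (Eqs. 1-6).
import torch
import torch.nn as nn

V, E, H = 100, 32, 16                 # vocab size, embedding size, GRU hidden size
emb = nn.Embedding(V, E)
bigru = nn.GRU(E, H, batch_first=True, bidirectional=True)

doc = torch.randint(0, V, (1, 50))    # document token ids, batch of 1
qry = torch.randint(0, V, (1, 12))    # question token ids
pl = 7                                # index of the XXXX placeholder in the question

d_ctx, _ = bigru(emb(doc))            # (1, 50, 2H): c^ctx_d
q_ctx, _ = bigru(emb(qry))            # (1, 12, 2H): c^ctx_q
r_q = q_ctx[:, pl]                    # (1, 2H): placeholder representation (Eq. 4)

# Dot-product attention of the question vector over every document position.
scores = torch.einsum('bd,btd->bt', r_q, d_ctx)     # (1, 50)

# Token positions where each candidate occurs in the document (toy values).
cand_positions = {'bird': [4, 20], 'head': [11], 'wood': [33, 41]}
summed = torch.stack([scores[0, pos].sum() for pos in cand_positions.values()])
probs = torch.softmax(summed, dim=0)                # P(a_i | q, d), Eq. 5
print(dict(zip(cand_positions, probs.tolist())))
```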
Enriching Context Representations with Knowledge (Context+Knowledge). To enhance the representation of the context, we add knowledge, retrieved as a set of knowledge facts.

Knowledge Encoding. For each instance in the dataset, we retrieve a number of relevant facts (cf. Section 2.1). Each retrieved fact is represented as a triple $f = (w^{subj}_{1..L_{subj}}, w^{rel}_0, w^{obj}_{1..L_{obj}})$, where $w^{subj}_{1..L_{subj}}$ and $w^{obj}_{1..L_{obj}}$ are multi-word expressions representing the subject and the object, with sequence lengths $L_{subj}$ and $L_{obj}$, and $w^{rel}_0$ is a single word token corresponding to the relation. (The subscript 0 in $w^{rel}_0$ indicates that we encode the relation as a single relation-type word, e.g., /r/IsUsedFor.) As a result of fact encoding, we obtain a separate knowledge memory for each instance in the data. To encode the knowledge we use a BiGRU to encode the triple argument tokens into the following context-encoded representations:

    f^{subj}_{last} = \mathrm{BiGRU}(\mathrm{Emb}(w^{subj}_{1..L_{subj}}), 0)    (7)
    f^{rel}_{last} = \mathrm{BiGRU}(\mathrm{Emb}(w^{rel}_0), f^{subj}_{last})    (8)
    f^{obj}_{last} = \mathrm{BiGRU}(\mathrm{Emb}(w^{obj}_{1..L_{obj}}), f^{rel}_{last})    (9)

where $f^{subj}_{last}$, $f^{rel}_{last}$ and $f^{obj}_{last}$ are the final hidden states of the context encoder BiGRU, which are also used as initial states for encoding the next triple attribute in left-to-right order. See the Supplement for comprehensive visualizations. The motivation behind this encoding is: (i) we encode the knowledge fact attributes in the same vector space as the plain tokens; (ii) we preserve the directionality of the triple; (iii) we use the relation type as a way of filtering the subject information that initializes the object.

Querying the Knowledge Memory. To enrich the context representation of the document and question tokens with the facts collected in the knowledge memory, we select a single sum of weighted fact representations for each token using Key-Value retrieval (Miller et al., 2016). In our model the key $M^{k(ey)}_i$ can be either $f^{subj}_{last}$ or $f^{obj}_{last}$, and the value $M^{v(alue)}_i$ is $f^{obj}_{last}$. For each context-encoded token $c^{ctx}_{s_i}$ ($s \in \{d, q\}$; $i$ the token index) we attend over the memory keys $M^k_{1..P}$ of the $P$ retrieved knowledge facts. We use an attention function Att, normalize the scalar attention values with softmax, multiply them with the value representations $M^v_{1..P}$ and sum the result into a single vector value representation $c^{kn}_{s_i}$:

    c^{kn}_{s_i} = \sum \mathrm{softmax}(\mathrm{Att}(c^{ctx}_{s_i}, M^{k}_{1..P}))^{\top} M^{v}_{1..P}    (10)

Att is a dot product, but it can be replaced with another attention function. As a result of this operation, the context token representation $c^{ctx}_{s_i}$ and the corresponding retrieved knowledge $c^{kn}_{s_i}$ lie in the same vector space $\mathbb{R}^{2h}$.

Combine Context and Knowledge (ctx+kn). We combine the original context token representation $c^{ctx}_{s_i}$ with the acquired knowledge representation $c^{kn}_{s_i}$ to obtain $c^{ctx+kn}_{s_i}$:

    c^{ctx+kn}_{s_i} = \gamma c^{ctx}_{s_i} + (1 - \gamma) c^{kn}_{s_i}    (11)

where $\gamma = 0.5$. We keep $\gamma$ static, but it can be replaced with a gating function.

Answer Prediction: Qctx(+kn) to Dctx(+kn). To rank the answer candidates $a_1..a_L$ we use an attention sum similar to Eq. 5 over an attention $\alpha^{ensemble}_{ij}$ that combines attentions between the context (ctx) and context+knowledge (ctx+kn) representations of the question ($r^{ctx(+kn)}_q$) and the candidate token occurrences $a_{ij}$ in the document ($c^{ctx(+kn)}_{d_j}$):

    P(a_i \mid q, d) = \mathrm{softmax}\Big(\sum_j \alpha^{ensemble}_{ij}\Big)    (12)
    \alpha^{ensemble}_{ij} = W_1 \mathrm{Att}(r^{ctx}_q, c^{ctx}_{d_j}) + W_2 \mathrm{Att}(r^{ctx}_q, c^{ctx+kn}_{d_j}) + W_3 \mathrm{Att}(r^{ctx+kn}_q, c^{ctx}_{d_j}) + W_4 \mathrm{Att}(r^{ctx+kn}_q, c^{ctx+kn}_{d_j})    (13)

where $j$ ranges over the list of indices that point to the occurrences of candidate $a_i$ in the document context representation $c^{ctx(+kn)}_d$. $W_{1..4}$ are scalar weights initialized to 1.0 and optimized during training. (An example of learned $W_{1..4}$ is (2.13, 1.41, 1.49, 1.84) in the setting (CBT CN, CN5Sel, Subj-Obj as key-value, 50 facts).) We propose this combination of ctx and ctx+kn attentions because our task does not provide supervision on whether the knowledge is needed or not.
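The key-value lookup and the ensemble scoring can be sketched in a few lines of PyTorch. The following is a hedged illustration of Eqs. 10-13 only (shapes, variable names and the random inputs are ours, not the released implementation):

```python
# Sketch of key-value knowledge attention (Eq. 10), the static combination
# (Eq. 11) and the ensemble of four attentions (Eq. 13).
import torch
import torch.nn as nn

B, T, P, H2 = 1, 50, 20, 32            # batch, doc length, number of facts, 2h
d_ctx = torch.randn(B, T, H2)          # c^ctx_d  (context-encoded document)
r_q_ctx = torch.randn(B, H2)           # r^ctx_q  (question placeholder vector)
mem_keys = torch.randn(B, P, H2)       # M^k: e.g. f^subj_last of each fact
mem_vals = torch.randn(B, P, H2)       # M^v: f^obj_last of each fact

def knowledge_lookup(ctx, keys, vals):
    # Eq. 10: for every context vector, softmax-attend over fact keys and
    # return the weighted sum of fact values.
    att = torch.einsum('btd,bpd->btp', ctx, keys)        # dot-product attention
    att = torch.softmax(att, dim=-1)
    return torch.einsum('btp,bpd->btd', att, vals)       # c^kn

gamma = 0.5
d_kn = knowledge_lookup(d_ctx, mem_keys, mem_vals)
d_ctx_kn = gamma * d_ctx + (1 - gamma) * d_kn            # Eq. 11 for the document

q_kn = knowledge_lookup(r_q_ctx.unsqueeze(1), mem_keys, mem_vals).squeeze(1)
r_q_ctx_kn = gamma * r_q_ctx + (1 - gamma) * q_kn        # Eq. 11 for the question

# Eq. 13: four token-level attentions, mixed with learnable scalar weights W1..W4.
W = nn.Parameter(torch.ones(4))
pairs = [(r_q_ctx, d_ctx), (r_q_ctx, d_ctx_kn), (r_q_ctx_kn, d_ctx), (r_q_ctx_kn, d_ctx_kn)]
alpha = sum(W[i] * torch.einsum('bd,btd->bt', q, d) for i, (q, d) in enumerate(pairs))

# As in Eq. 12, alpha[:, j] would then be summed over each candidate's occurrences
# and normalized with a softmax over candidates.
print(alpha.shape)   # torch.Size([1, 50])
```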
3 Data and Task Description

We experiment with knowledge-enhanced cloze-style reading comprehension using the Common Nouns and Named Entities partitions of the Children's Book Test (CBT) dataset (Hill et al., 2015). In the CBT cloze-style task a system is asked to read a children's story context of 20 sentences. The following, 21st sentence contains a placeholder token that the system needs to predict, choosing from a given set of 10 candidate words from the document. An example with suggested external knowledge facts is given in Figure 1. While in its Common Nouns setup the task can be considered a language modeling task, Hill et al. (2015) show that humans can answer the questions without the full context with an accuracy of only 64.4%, and a language model alone reaches 57.7%. By contrast, human performance when given the full context is 81.6%. Since the best neural model (Munkhdalai and Yu, 2016) achieves only 72.0% on the task, we hypothesize that the task itself can benefit from external knowledge. The characteristics of the data are shown in Table 1. Other popular cloze-style datasets such as CNN/Daily Mail (Hermann et al., 2015) or Who Did What (Onishi et al., 2016) are mainly focused on finding Named Entities, where the benefit of adding commonsense knowledge (as we show for the NE part of CBT) would be more limited.

|       | CN            | NE            |
|-------|---------------|---------------|
| Train | 120,769 / 470 | 108,719 / 433 |
| Dev   | 2,000 / 448   | 2,000 / 412   |
| Test  | 2,500 / 461   | 2,500 / 424   |
| Vocab | 53,185        | 53,063        |

Table 1: Characteristics of the Children's Book Test datasets. CN: Common Nouns, NE: Named Entities. Cells for Train, Dev and Test show the overall number of examples / the average story size in tokens.

Knowledge Source. As a source of commonsense knowledge we use the Open Mind Common Sense part of ConceptNet 5.0, which contains 630k fact triples. We refer to this entire source as CN5All. We conduct experiments with subparts of this data: CN5WN3, which is the WordNet 3 part of CN5All (213k triples), and CN5Sel, which excludes the following WordNet relations: RelatedTo, IsA, Synonym, SimilarTo, HasContext.

4 Related Work

Cloze-Style Reading Comprehension. Following the original MCTest dataset (Richardson et al., 2013), a multiple-choice variant of cloze-style RC, several large-scale, automatically generated datasets for cloze-style reading comprehension have recently gained a lot of attention, among others the CNN/Daily Mail (Hermann et al., 2015; Onishi et al., 2016) and the Children's Book Test (CBTest) datasets (Hill et al., 2015). Early work introduced simple but effective single-turn models (Hermann et al., 2015; Kadlec et al., 2016; Chen et al., 2016) that read the document once with the question representation 'in mind' and select an answer from a given set of candidates. More complex models (Weston et al., 2015; Dhingra et al., 2017; Cui et al., 2017; Munkhdalai and Yu, 2016; Sordoni et al., 2016) perform multi-turn reading of the story context and the question before inferring the correct answer, or use additional features (GA Reader; Dhingra et al., 2017). Performing multiple hops and modeling a deeper relation between question and document was further developed by several models (Seo et al., 2017; Xiong et al., 2016; Wang et al., 2016, 2017; Shen et al., 2016) on another generation of RC datasets, e.g., SQuAD (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2017) and TriviaQA (Joshi et al., 2017).

Integrating Background Knowledge in Neural Models.
Integrating background knowledge in a neural model was proposed in the neural-checklist model by Kiddon et al. (2016) for text generation of recipes. They copy words from a list of ingredients instead of inferring the word from a global vocabulary. Ahn et al. (2016) proposed a language model that copies fact attributes from a topic knowledge memory. The model predicts a fact in the knowledge memory using a gating mechanism and given this fact, the next word to be selected is copied from the fact attributes. The knowledge facts are encoded using embeddings obtained using TransE (Bordes et al., 2013). Yang et al. (2017) extended a seq2seq model with attention to external facts for dialogue and recipe generation and a co-reference resolution-aware language model. A similar model was adopted by He et al. (2017) for answer generation in dialogue. Incorporating external knowledge in a neural model has proven beneficial for several other tasks: Yang and Mitchell (2017) incorporated knowledge di826 rectly into the LSTM cell state to improve event and entity extraction. They used knowledge embeddings trained on WordNet (Miller et al., 1990) and NELL (Mitchell et al., 2015) using the BILINEAR (Yang et al., 2014) model. Work similar to ours is by Long et al. (2017), who have introduced a new task of Rare Entity Prediction. The task is to read a paragraph from WikiLinks (Singh et al., 2012) and to fill a blank field in place of a missing entity. Each missing entity is characterized with a short description derived from Freebase, and the system needs to choose one from a set of pre-selected candidates to fill the field. While the task is superficially similar to cloze-style reading comprehension, it differs considerably: first, when considering the text without the externally provided entity information, it is clearly ambiguous. In fact, the task is more similar to Entity Linking tasks in the Knowledge Base Population (KBP) tracks at TAC 2013-2017, which aim at detecting specific entities from Freebase. Our work, by contrast, examines the impact of injecting external knowledge in a reading comprehension, or NLU task, where the knowledge is drawn from a commonsense knowledge base, ConceptNet in our case. Another difference is that in their setup, the reference knowledge for the candidates is explicitly provided as a single, fixed set of knowledge facts (the entity description), encoded in a single representation. In our work, we are retrieving (typically) distinct sets of knowledge facts that might (or might not) be relevant for understanding the story and answering the question. Thus, in our setup, we crucially depend on the ability of the attention mechanism to retrieve relevant pieces of knowledge. Our aim is to examine to what extent commonsense knowledge can contribute to and improve the cloze-style RC task, that in principle is supposed to be solvable without explicitly given additional knowledge. We show that by integrating external commonsense knowledge we achieve clear improvements in reading comprehension performance over a strong baseline, and thus we can speculate that humans, when solving this RC task, are similarly using commonsense knowledge as implicitly understood background knowledge. Recent unpublished work in Weissenborn et al. (2017) is driven by similar intentions. The authors exploit knowledge from ConceptNet to improve the performance of a reading comprehension model, experimenting on the recent SQuAD (Rajpurkar et al., 2016) and TriviaQA (Joshi et al., 2017) datasets. 
While the source of the background knowledge is the same, the way of integrating this knowledge into the model and the task is different. (i) We use attention to select unordered fact triples using key-value retrieval, and (ii) we integrate the knowledge that is considered relevant explicitly for each token in the context. The model of Weissenborn et al. (2017), by contrast, explicitly reads the acquired additional knowledge sequentially after reading the document and question, but transfers the background knowledge implicitly, by refining the word embeddings of the words in the document and the question with the words from the supporting knowledge that share the same lemma. In contrast to the implicit knowledge transfer of Weissenborn et al. (2017), our explicit attention over external knowledge facts can deliver insights about the knowledge used and how it interacts with specific context tokens (see Section 6).

5 Experiments and Results

We perform a quantitative analysis through experiments, studying the impact of the knowledge used and of the different model components that employ the external knowledge. Some of the experiments below focus only on the Common Nouns (CN) dataset, as it has been shown to be more challenging than Named Entities (NE) in prior work.

5.1 Model Parameters

We experiment with different model parameters.

Number of facts. We explore different sizes of knowledge memories, in terms of the number of acquired facts. If not stated otherwise, we use 50 facts per example.

Key-Value Selection Strategy. We use two strategies for defining key and value (Key/Value): Subj/Obj and Obj/Obj, where Subj and Obj are the subject and object attributes of the fact triples, selected as Key and Value for the KV memory (see Section 2.2, Querying the Knowledge Memory). If not stated otherwise, we use the Subj/Obj strategy.

Answer Selection Components. If not stated otherwise, we use the ensemble attention $\alpha^{ensemble}$ (combinations of ctx and ctx+kn) to rank the answers. We call this our Full model (see Sec. 2.2).

| Source        | Dev   | Test  |
|---------------|-------|-------|
| CN5All        | 71.40 | 66.72 |
| CN5WN3 (WN3)  | 70.70 | 68.48 |
| CN5Sel(ected) | 71.85 | 67.64 |

Table 2: Results with different knowledge sources, for CBT-CN (Full model, 50 facts).

| # facts | 50    | 100   | 200   | 500   |
|---------|-------|-------|-------|-------|
| Dev     | 71.85 | 71.35 | 71.40 | 71.20 |
| Test    | 67.64 | 67.44 | 68.12 | 67.24 |

Table 3: Results for CBT (CN) with different numbers of facts (Full model, CN5Sel).

Hyper-parameters. For our experiments we use pre-trained GloVe (Pennington et al., 2014) embeddings, a BiGRU with hidden size 256, a batch size of 64 and a learning rate of 0.001, as these settings were shown (Kadlec et al., 2016) to perform well for the AS Reader.

5.2 Empirical Results

We perform experiments with the different model parameters described above. We report accuracy on the Dev and Test sets and use the Dev results to prune the experiments.

Knowledge Sources. We experiment with different configurations of ConceptNet facts (see Section 3). Results on the CBT CN dataset are shown in Table 2. CN5Sel works best on the Dev set, but CN5WN3 works much better on Test. Further experiments use the CN5Sel setup.

Number of facts. We further experiment with different numbers of facts on the Common Nouns dataset (Table 3). The best result on the Dev set is obtained with 50 facts, so we use this setting for further experiments.

Component ablations.
We ensemble the attentions from different combinations of the interaction between the question and document context (ctx) representations and context+knowledge (ctx+kn) representations in order to infer the right answer (see Section 2.2, Answer Prediction). Table 4 shows that the combination of different interactions between ctx and ctx+kn representations leads to a clear improvement over the w/o-knowledge setup, in particular for the Common Nouns dataset. We also performed ablations for a model with 100 facts (see Supplement).

| D-repr to Q-repr interaction | NE Dev | NE Test | CN Dev | CN Test |
|------------------------------|--------|---------|--------|---------|
| Dctx, Qctx (w/o know)        | 75.50  | 70.30   | 68.20  | 64.80   |
| Dctx+kn, Qctx+kn             | 76.45  | 69.68   | 70.85  | 66.32   |
| Dctx, Qctx+kn                | 77.10  | 69.72   | 70.80  | 66.32   |
| Dctx+kn, Qctx                | 75.65  | 70.88   | 71.20  | 67.96   |
| Full model                   | 76.80  | 70.24   | 71.85  | 67.64   |
| w/o Dctx, Qctx               | 75.95  | 70.24   | 70.65  | 67.12   |
| w/o Dctx+kn, Qctx+kn         | 76.20  | 69.80   | 70.75  | 67.00   |
| w/o Dctx, Qctx+kn            | 76.55  | 70.52   | 71.75  | 66.32   |
| w/o Dctx+kn, Qctx            | 76.05  | 70.84   | 70.80  | 66.80   |

Table 4: Results for different combinations of interactions between document (D) and question (Q) context (ctx) and context + knowledge (ctx+kn) representations (CN5Sel, 50 facts).

Key-Value Selection Strategy. Table 5 shows that for the NE dataset the two strategies perform equally well on the Dev set, whereas the Subj/Obj strategy works slightly better on the Test set. For Common Nouns, Subj/Obj is better.

| Key/Value | NE Dev | NE Test | CN Dev | CN Test |
|-----------|--------|---------|--------|---------|
| Subj/Obj  | 76.65  | 71.52   | 71.85  | 67.64   |
| Obj/Obj   | 76.70  | 71.28   | 71.25  | 67.48   |

Table 5: Results for key-value knowledge retrieval and integration (CN5Sel, 50 facts). Subj/Obj means: we attend over the fact subject (Key) and take the weighted fact object as value (Value).

Comparison to Previous Work. Table 6 compares our model (Knowledgeable Reader) to previous work on the CBT datasets. We show the results of our model with the settings that performed best on the Dev sets of the two datasets NE and CN: for NE, (Dctx+kn, Qctx) with 100 facts; for CN, the Full model with 50 facts, both with CN5Sel. Note that our work focuses on the impact of external knowledge and employs a single interaction (single hop) between the document context and the question, so we primarily compare to, and aim at improving over, similar models. KnReader clearly outperforms prior single-hop models on both datasets. While we do not improve over the state of the art, our model stands well among other models that perform multiple hops. In the Supplement we also give a comparison to ensemble models and to some models that use re-ranking strategies.

| Models                              | NE Dev | NE Test | CN Dev | CN Test |
|-------------------------------------|--------|---------|--------|---------|
| Human (ctx + q)                     |        | 81.6    |        | 81.6    |
| Single interaction                  |        |         |        |         |
| LSTMs (ctx + q) (Hill et al., 2015) | 51.2   | 41.8    | 62.6   | 56.0    |
| AS Reader                           | 73.8   | 68.6    | 68.8   | 63.4    |
| AS Reader (our impl.)               | 75.5   | 70.3    | 68.2   | 64.8    |
| KnReader (ours)                     | 77.4   | 71.4    | 71.8   | 67.6    |
| Multiple interactions               |        |         |        |         |
| MemNNs (Weston et al., 2015)        | 70.4   | 66.6    | 64.2   | 63.0    |
| EpiReader (Trischler et al., 2016)  | 74.9   | 69.0    | 71.5   | 67.4    |
| GA Reader (Dhingra et al., 2017)    | 77.2   | 71.4    | 71.6   | 68.0    |
| IAA Reader (Sordoni et al., 2016)   | 75.3   | 69.7    | 72.1   | 69.2    |
| AoA Reader (Cui et al., 2017)       | 75.2   | 68.6    | 72.2   | 69.4    |
| GA Reader (+feat)                   | 77.8   | 72.0    | 74.4   | 70.7    |
| NSE (Munkhdalai and Yu, 2016)       | 77.0   | 71.4    | 74.3   | 71.9    |

Table 6: Comparison of KnReader to existing end-to-end neural models on the benchmark datasets.

6 Discussion and Analysis

6.1 Analysis of the Empirical Results

Our experiments examined key parameters of the KnReader. As expected, the injection of background knowledge yields only small improvements over the baseline model for Named Entities.
However, on this dataset our single-hop model is competitive with most multi-hop neural architectures. The integration of knowledge clearly helps for the Common Nouns task. The impact of the knowledge sources (Table 2) differs between the Dev and Test sets, which indicates that either the model or the data subsets are sensitive to different knowledge types and retrieved knowledge. Table 5 shows that attending over the Subj of the knowledge triple is slightly better than attending over the Obj. This shows that using a Key-Value memory is valuable. A reason for the lower performance of Obj/Obj is that the model picks facts that are similar to the candidate tokens, not adding much new information. From the empirical results we see that training and evaluation with fewer facts is slightly better. We hypothesize that this is related to the lack of supervision on the retrieved and attended knowledge.

6.2 Interpreting Component Importance

Figure 3 shows the impact on prediction accuracy of the individual components of the Full model, including the interaction between D and Q with ctx or ctx+kn (w/o ctx-only). The values for each component are obtained from the attention weights, without retraining the model. The difference between the blue (left) and orange (right) values indicates how much the module contributes to the model. Interestingly, the ranking of the contributions (Dctx, Qctx+kn > Dctx+kn, Qctx > Dctx+kn, Qctx+kn) corresponds to the component importance ablation on the Dev set (lines 5-8, Table 4).

[Figure 3 (two panels: Subj/Obj, 50 facts; Obj/Obj, 50 facts): Number of items with reversed prediction (± correct) for each combination of (ctx+kn, ctx) for Q and D. We report the number of wrong → correct (blue) and correct → wrong (orange) changes when switching from the score w/o knowledge to the score w/ knowledge. The best model type is Ensemble. (Full model w/o Dctx, Qctx.)]

6.3 Qualitative Data Investigation

We use the attention values of the interactions between Dctx(+kn) and Qctx(+kn), and the attention to facts from each candidate token and from the question placeholder, to interpret how knowledge is employed to make a prediction for a single example.

Method: Interpreting Model Components. We manually inspect examples from the evaluation sets where KnReader improves the prediction (blue (left) category, Fig. 3) or makes the prediction worse (orange (right) category, Fig. 3). Figure 4 shows the question with its placeholder, followed by the answer candidates and their associated attention weights as assigned by the model w/o knowledge. The matrix shows selected facts and their assigned weights for the question and the candidate tokens. Finally, we show the attention weights determined by the knowledge-enhanced D to Q interactions. The attention to the correct answer (head) is low when the model considers the text alone (w/o knowledge). When adding retrieved knowledge to Q only (row ctx, ctx+kn) and to both Q and D (row ctx+kn, ctx+kn) the score improves, while when adding knowledge to D alone (row ctx+kn, ctx) the score remains ambiguous. The combined score Ensemble (see Eq. 13) then takes the final decision for the answer. In this example, the question can be answered without the story. The model tries to find knowledge that is related to eyes.
[Figure 4: Interpreting the components of KnReader. Adding knowledge to Q and D increases the score for the correct answer; results for the top 5 candidates (bird, head, legs, sides, wood) are shown. Example question: "UNK_59 did not say anything; but when the other two had passed on she bent down to the bird, brushed aside the feathers from his XXXXX, and kissed his closed eyes gently." Retrieved facts include, e.g., ear /r/PartOf head, head /r/PartOf animal, beak /r/PartOf bird. (Full model, CN data, CN5Sel, Subj/Obj, 50 facts.)]

The fact eyes /r/PartOf head is not contained in the retrieved knowledge; instead, the model selects the fact ear /r/PartOf head, which receives the highest attention from Q. The weighted Obj representation (head) is added to the question with the highest weight, together with animal and bird from the next most highly weighted facts. This results in a high score for the Qctx to Dctx+kn interaction with the candidate head. See the Supplement for more details.

Using the method described above, we analyze several example cases (presented in the Supplement) that highlight different aspects of our model. Here we summarize our observations. (i) Answer prediction from Q or Q+D. In both human and machine RC, questions can be answered based on the question alone (Figure 4) or jointly with the story context (Case 2, Suppl.). We show empirically that enriching the question with knowledge is crucial for the first type, while enrichment of both Q and D is required for the second. (ii) Overcoming frequency bias. We show that when appropriate knowledge is available and selected, the model is able to correct a frequency bias towards an incorrect answer (Cases 1 and 3). (iii) Providing appropriate knowledge. We observe a lack of knowledge regarding events (e.g., take off vs. put on clothes, Case 2; climb up, Case 5). Nevertheless, relevant knowledge from CN5 can help predict infrequent candidates (Case 2). (iv) Knowledge, Q and D encoding. The context encoding of facts allows the model to detect knowledge that is semantically related, but not close at the surface level, to phrases in Q and D (Case 2). The model finds facts matching non-trivial paraphrases (e.g., undressed-naked, Case 2).

7 Conclusion and Future Work

We propose a neural cloze-style reading comprehension model that incorporates external commonsense knowledge, building on a single-turn neural model. Incorporating external knowledge improves its results with a relative error rate reduction of 9% on Common Nouns; thus the model is able to compete with more complex RC models. We show that the types of knowledge contained in ConceptNet are useful. We provide quantitative and qualitative evidence of the effectiveness of our model, which learns how to select relevant knowledge to improve RC. The attractiveness of our model lies in its transparency and flexibility: due to the attention mechanism, we can trace and analyze the facts considered in answering specific questions.
This opens up for deeper investigation and future improvement of RC models in a targeted way, allowing us to investigate what knowledge sources are required for different data sets and domains. Since our model directly integrates background knowledge with the document and questioncontext representations, it can be adapted to very different task settings where we have a pair of two arguments (i.e. entailment, question answering, etc.) In future work, we will investigate even tighter integration of the attended knowledge and stronger reasoning methods. Acknowledgments This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No. GRK 1994/1. We thank the reviewers for their helpful questions and comments. 830 References Sungjin Ahn, Heeyoul Choi, Tanel P¨arnamaa, and Yoshua Bengio. 2016. A neural knowledge language model. In CoRR, volume abs/1608.00318. Nikita Bhutani, H V Jagadish, and Dragomir Radev. 2016. Nested propositions in open information extraction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 55–64, Austin, Texas. Association for Computational Linguistics. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems 26, pages 2787–2795. Curran Associates, Inc. Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2358–2367, Berlin, Germany. Association for Computational Linguistics. Junyoung Chung, C¸ alar G¨ulc¸ehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv e-prints, abs/1412.3555. Presented at the Deep Learning workshop at NIPS2014. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2017. Attention-overattention neural networks for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 593–602. Association for Computational Linguistics. Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2017. Gatedattention readers for text comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1832–1846. Association for Computational Linguistics. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1535–1545, Edinburgh, Scotland, UK. Association for Computational Linguistics. Shizhu He, Cao Liu, Kang Liu, and Jun Zhao. 2017. Generating natural answers by incorporating copying and retrieving mechanisms in sequence-tosequence learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 199– 208. Association for Computational Linguistics. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In C. Cortes, N. D. Lawrence, D. D. Lee, M. 
Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1693–1701. Curran Associates, Inc. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. volume abs/1511.02301. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics (ACL) 2017. Rudolf Kadlec, Martin Schmid, Ondˇrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 908–918. Association for Computational Linguistics. Chlo´e Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 329–339, Austin, Texas. Association for Computational Linguistics. Teng Long, Emmanuel Bengio, Ryan Lowe, Jackie Chi Kit Cheung, and Doina Precup. 2017. World knowledge for reading comprehension: Rare entity prediction with hierarchical lstms using external descriptions. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 825–834, Copenhagen, Denmark. Association for Computational Linguistics. Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 523–534, Jeju Island, Korea. Association for Computational Linguistics. Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1400–1409, Austin, Texas. Association for Computational Linguistics. George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. 831 1990. Introduction to wordnet: An on-line lexical database*. In International Journal of Lexicography, volume 3, pages 235–244. T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. 2015. Never-ending learning. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI-15). Tsendsuren Munkhdalai and Hong Yu. 2016. Reasoning with Memory Augmented Neural Networks for Language Comprehension. In International Conference on Learning Representations (ICLR) 2017. Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217– 250. Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2230– 2235, Austin, Texas. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193–203, Seattle, Washington, USA. Association for Computational Linguistics. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hananneh Hajishirzi. 2017. Bi-Directional Attention Flow for Machine Comprehension. In Proceedings of International Conference of Learning Representations 2017, pages 1–12. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2016. Reasonet: Learning to stop reading in machine comprehension. In Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016 colocated with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, December 9, 2016. Parmjit Singh, T Lin, E.T. Mueller, G Lim, T Perkins, and W.L. Zhu. 2002. Open mind common sense: Knowledge acquisition from the general public. In Lecture Notes in Computer Science, volume 2519, pages 1223–1237. Sameer Singh, Amarnag Subramanya, Fernando Pereira, and Andrew McCallum. 2012. Wikilinks: A large-scale cross-document coreference corpus labeled via links to Wikipedia. Technical Report UMCS-2012-015. Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. 2016. Iterative alternating neural attention for machine reading. abs/1606.02245. Robert Speer, Joshua Chin, and Catherine Havasi. 2017. ConceptNet 5.5: An Open Multilingual Graph of General Knowledge. In AAAI. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. volume 15, pages 1929–1958. Thomas Pellissier Tanon, Denny Vrande, San Francisco, Sebastian Schaffert, and Thomas Steiner. 2016. From Freebase to Wikidata : The Great Migration. In Proceedings of the 25th International Conference on World Wide Web, pages 1419–1428. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. Newsqa: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191–200, Vancouver, Canada. Association for Computational Linguistics. Adam Trischler, Zheng Ye, Xingdi Yuan, Philip Bachman, Alessandro Sordoni, and Kaheer Suleman. 2016. Natural language comprehension with the epireader. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 128–137. Association for Computational Linguistics. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 189–198. Association for Computational Linguistics. Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. CoRR, abs/1612.04211. 832 Dirk Weissenborn, Tomas Kocisky, and Chris Dyer. 2017. Dynamic integration of background knowledge in neural NLU systems. CoRR, abs/1706.02596. Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In International Conference on Learning Representations (ICLR), 2015. Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. In International Conference on Learning Representations (ICLR), 2017, volume abs/1611.01604. Bishan Yang and Tom Mitchell. 2017. Leveraging knowledge bases in lstms for improving machine reading. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1436–1446. Association for Computational Linguistics. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In International Conference on Learning Representations (ICLR), 2015. Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2017. Reference-aware language models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1851–1860. Association for Computational Linguistics.
Multi-Relational Question Answering from Narratives: Machine Reading and Reasoning in Simulated Worlds

Igor Labutov and Bishan Yang (Machine Learning Dept., Carnegie Mellon University, Pittsburgh, PA 15213), Anusha Prakash (Language Technologies Inst., Carnegie Mellon University, Pittsburgh, PA 15213), Amos Azaria (Computer Science Dept., Ariel University, Israel)

Abstract

Question Answering (QA), as a research field, has primarily focused on either knowledge bases (KBs) or free text as a source of knowledge. These two sources have historically shaped the kinds of questions that are asked over these sources, and the methods developed to answer them. In this work, we look towards a practical use-case of QA over user-instructed knowledge that uniquely combines elements of both structured QA over knowledge bases and unstructured QA over narrative, introducing the task of multi-relational QA over personal narrative. As a first step towards this goal, we make three key contributions: (i) we generate and release TEXTWORLDSQA, a set of five diverse datasets, where each dataset contains dynamic narrative that describes entities and relations in a simulated world, paired with variably compositional questions over that knowledge, (ii) we perform a thorough evaluation and analysis of several state-of-the-art QA models and their variants on this task, and (iii) we release a lightweight Python-based framework we call TEXTWORLDS for easily generating arbitrary additional worlds and narrative, with the goal of allowing the community to create and share a growing collection of diverse worlds as a test-bed for this task.

1 Introduction

Personal devices that interact with users via natural language conversation are becoming ubiquitous (e.g., Siri, Alexa); however, very little of that conversation today allows the user to teach, and then query, new knowledge.

[Figure 1: Illustration of our task: relational question answering from dynamic knowledge expressed via personal narrative. The user teaches new knowledge ("John is now the department head", "She and Amy are both TAs for this course") and then queries the taught knowledge ("Which PhD students are advised by the department head?").]

Most of the focus in these personal devices has been on Question Answering (QA) over general world-knowledge (e.g., "who was the president in 1980" or "how many ounces are in a cup"). These devices open a new and exciting possibility of enabling end-users to teach machines in natural language, e.g., by expressing the state of their personal world to their virtual assistant (e.g., via narrative about people and events in that user's life) and enabling the user to ask questions over that personal knowledge (e.g., "which engineers in the QC team were involved in the last meeting with the director?"). This type of question highlights a unique blend of two conventional streams of research in Question Answering (QA): QA over structured sources such as knowledge bases (KBs), and QA over unstructured sources such as free text. This blend is a natural consequence of our problem setting: (i) users may choose to express rich relational knowledge about their world, in turn enabling them to pose complex compositional queries (e.g., "all CS undergrads who took my class last semester"), while at the same time (ii) personal knowledge generally evolves through time and has an open and growing set of relations, making natural language the only practical interface for creating and maintaining that knowledge by non-expert users. In short, the task that we address in this work is: multi-relational question answering from dynamic knowledge expressed via narrative.

[Figure 2: Illustrative snippets from two sample worlds (an Academic Department World and a Software Engineering World), pairing first-person narrative statements with questions and their answers. We aim to generate natural-sounding first-person narratives from five diverse worlds, covering a range of different events, entities and relations.]

Although we hypothesize that question-answering over personal knowledge of this sort is ubiquitous (e.g., between a professor and their administrative assistant, or even just in the user's head), such interactions are rarely recorded, presenting a significant practical challenge to collecting a sufficiently large real-world dataset of this type. At the same time, we hypothesize that the technical challenges involved in developing models for relational question answering from narrative would not be fundamentally impacted if addressed via sufficiently rich, but controlled, simulated narratives. Such simulations also offer the advantage of enabling us to directly experiment with stories and queries of different complexity, potentially offering additional insight into the fundamental challenges of this task. While our problem setting blends the problems of relational question answering over knowledge bases and question answering over text, our hypothesis is that end-to-end QA models may learn to answer such multi-sentential relational queries without relying on an intermediate knowledge base representation. In this work, we conduct an extensive evaluation of a set of state-of-the-art end-to-end QA models on our task and analyze their results.

2 Related Work

Question answering has been mainly studied in two different settings: KB-based and text-based.
KB-based QA mostly focuses on parsing questions to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2012; Berant et al., 2013; Kwiatkowski et al., 2013; Yih et al., 2015) in order to better retrieve answer candidates from a knowledge base. Text-based QA aims to directly answer questions from the input text. This includes works on early information retrieval-based methods (Banko et al., 2002; Ahn et al., 2004) and methods that build on extracted structured representations from both the question and the input text (Sachan et al., 2015; Sachan and Xing, 2016; Khot et al., 2017; Khashabi et al., 2018b). Although these structured presentations make reasoning more effective, they rely on sophisticated 835 NLP pipelines and suffer from error propagation. More recently, end-to-end neural architectures have been successfully applied to textbased QA, including Memory-augmented neural networks (Sukhbaatar et al., 2015; Miller et al., 2016; Kumar et al., 2016) and attention-based neural networks (Hermann et al., 2015; Chen et al., 2016; Kadlec et al., 2016; Dhingra et al., 2017; Xiong et al., 2017; Seo et al., 2017; Chen et al., 2017). In this work, we focus on QA over text (where the text is generated from a supporting KB) and evaluate several state-of-the-art memoryaugmented and attention-based neural architectures on our QA task. In addition, we consider a sequence-to-sequence model baseline (Bahdanau et al., 2015), which has been widely used in dialog (Vinyals and Le, 2015; Ghazvininejad et al., 2017) and recently been applied to generating answer values from Wikidata (Hewlett et al., 2016). There are numerous datasets available for evaluating the capabilities of QA systems. For example, MCTest (Richardson et al., 2013) contains comprehension questions for fictional stories. Allen AI Science Challenge (Clark, 2015) contains science questions that can be answered with knowledge from text books. RACE (Lai et al., 2017) is an English exam dataset for middle and high school Chinese students. MULTIRC (Khashabi et al., 2018a) is a dataset that focuses on evaluating multi-sentence reasoning skills. These datasets all require humans to carefully design multiplechoice questions and answers, so that certain aspects of the comprehension and reasoning capabilities are properly evaluated. As a result, it is difficult to collect them at scale. Furthermore, as the knowledge required for answering each question is not clearly specified in these datasets, it can be hard to identify the limitations of QA systems and propose improvements. Weston et al. (2015) proposes to use synthetic QA tasks (the BABI dataset) to better understand the limitations of QA systems. BABI builds on a simulated physical world similar to interactive fiction (Montfort, 2005) with simple objects and relations and includes 20 different reasoning tasks. Various types of end-to-end neural networks (Sukhbaatar et al., 2015; Lee et al., 2015; Peng et al., 2015) have demonstrated promising accuracies on this dataset. However, the performance can hardly translate to real-world QA datasets, as BABI uses a small vocabulary (150 words) and short sentences with limited language variations (e.g., nesting sentences, coreference). A more sophisticated QA dataset with a supporting KB is WIKIMOVIES (Miller et al., 2016), which contains 100k questions about movies, each of them is answerable by using either a KB or a Wikipedia article. 
However, WIKIMOVIES is highly domain-specific and, similar to BABI, its questions are designed to be in simple forms with little compositionality, which limits the difficulty of the tasks. Our dataset differs from the above datasets in that (i) it contains five different realistic domains, permitting cross-domain evaluation to test the ability of models to generalize beyond a fixed set of KB relations, (ii) it exhibits rich referring expressions and linguistic variations (a vocabulary much larger than that of the BABI dataset), and (iii) questions in our dataset are designed to be deeply compositional and can cover multiple relations mentioned across multiple sentences.

Other large-scale QA datasets include cloze-style datasets such as CNN/Daily Mail (Hermann et al., 2015), Children's Book Test (Hill et al., 2015), and Who Did What (Onishi et al., 2016); datasets with answers being spans in the document, such as SQuAD (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2016), and TriviaQA (Joshi et al., 2017); and datasets with human-generated answers, for instance MS MARCO (Nguyen et al., 2016) and SearchQA (Dunn et al., 2017). One common drawback of these datasets is the difficulty of assessing a system's capability of integrating information across a document context. Kočiský et al. (2017) recently emphasized this issue and proposed NarrativeQA, a dataset of fictional stories with questions that reflect the complexity of narratives: characters, events, and evolving relations. Our dataset contains similar narrative elements, but it is created with a supporting KB, and hence it is easier to analyze and interpret results in a controlled setting.

3 TEXTWORLDS: Simulated Worlds for Multi-Relational QA from Narratives

In this work, we synthesize narratives in five diverse worlds, each containing a thousand narratives, where each narrative describes the evolution of a simulated user's world from a first-person perspective. In each narrative, the simulated user may introduce new knowledge, update existing knowledge or express a state change (e.g., "Homework 3 is now due on Friday" or "Samantha passed her thesis defense"). Each narrative is interleaved with questions about the current state of the world, and the questions range in complexity depending on the amount of knowledge that needs to be integrated to answer them. This allows us to benchmark a range of QA models on their ability to answer questions that require different extents of relational reasoning. The set of worlds that we simulate as part of this work is as follows:

1. MEETING WORLD: This world describes situations related to professional meetings, e.g., meetings being set/cancelled, people attending meetings, topics of meetings.

2. HOMEWORK WORLD: This world describes situations from the first-person perspective of a student, e.g., courses taken, assignments in different courses, deadlines of assignments.

3. SOFTWARE ENGINEERING WORLD: This world describes situations from the first-person perspective of a software development manager, e.g., task assignment to different project team members, stages of software development, bug tickets.

4. ACADEMIC DEPARTMENT WORLD: This world describes situations from the first-person perspective of a professor, e.g., teaching assignments, faculty going on/returning from sabbaticals, students from different departments taking/dropping courses.
5. SHOPPING WORLD: This world describes situations about a person shopping for various occasions, e.g., adding items to a shopping list, purchasing items at different stores, noting where items are on sale.

3.1 Narrative

Each world is represented by a set of entities E and a set of unary, binary or ternary relations R. Formally, a single step in one simulation of a world involves a combination of instantiating new entities and defining new (or mutating existing) relations between entities. Practically, we implement each world as a collection of classes and methods, with each step of the simulation creating or mutating class instances by sampling entities and methods on those entities. By design, these classes and methods are easy to extend, to either enrich existing worlds or create new ones. Each simulation step is then expressed as a natural language statement, which is added to the narrative. In the process of generating a natural language expression, we employ a rich mechanism for generating anaphora, such as "meeting with John about the performance review" and "meeting that I last added", in addition to simple pronoun references. This allows us to generate more natural and flowing narratives. These references are generated and composed automatically by the underlying TEXTWORLDS framework, significantly reducing the effort needed to build new worlds. Furthermore, all generated stories also provide additional annotation that maps all entities to underlying gold-standard KB ids, allowing us to perform experiments that provide models with different degrees of access to the "simulation oracle".

We generate 1,000 narratives within each world, where each narrative consists of 100 sentences, plus up to 300 questions interleaved randomly within the narrative. See Figure 2 for illustrative snippets from two example narratives. Each story in a given world samples its entities from a large general pool of entity names collected from the web (e.g., people names, university names). Although some entities do overlap between stories, each story in a given world contains a unique flow of events and entities involved in those events. See Table 1 for the data statistics.

| Statistic                               | Value     |
|-----------------------------------------|-----------|
| # of total stories                      | 5,000     |
| # of total questions                    | 1,207,022 |
| Avg. # of entity mentions (per story)   | 217.4     |
| Avg. # of correct answers (per question)| 8.7       |
| Avg. # of statements per story          | 100       |
| Avg. # of tokens per story              | 837.5     |
| Avg. # of tokens per question           | 8.9       |
| Avg. # of tokens per answer             | 1.5       |
| Vocabulary size (tokens)                | 1,994     |
| Vocabulary size (entities)              | 10,793    |

Table 1: TEXTWORLDSQA dataset statistics.
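The released TEXTWORLDS framework's actual API is not shown in the paper; the following is a small illustrative sketch (all class, method and template names are invented) of the kind of class-and-method simulation step described above: a method mutates the world state, records the underlying KB fact, and emits a natural language statement, optionally with a simple anaphoric reference.

```python
# Toy sketch of a simulation step: mutate state, record a KB fact, emit an NL statement.
import random

class AcademicWorld:
    def __init__(self):
        self.facts = set()          # gold-standard KB: (relation, arg1, arg2)
        self.last_course = None

    def add_course(self, course, level):
        self.facts.add(('CourseLevel', course, level))
        self.last_course = course
        return f"There is a {level} level course called {course}"

    def enroll(self, student, course):
        self.facts.add(('EnrolledIn', student, course))
        # Simple anaphora: sometimes refer back to the most recently mentioned course.
        ref = "this course" if course == self.last_course and random.random() < 0.5 else course
        return f"{student} is a student in {ref}"

world = AcademicWorld()
narrative = [world.add_course("G301", "masters"),
             world.enroll("Roslyn", "G301")]
print("\n".join(narrative))
print(world.facts)
```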
We categorize generated questions into four types, reflecting the number and types of facts required to answer them; questions that require more facts to answer are typically more compositional in nature. We categorize each question in our dataset into one of the following four categories: Single Entity/Single Relation Answers to these questions are a single entity, e.g. “what is John’s email address?”, or expressed in lambda-calculus notation: λx.EmailAddress(John, x) The answers to these questions are found in a single sentence in the narrative, although it is possible that the answer may change through the course of the narrative (e.g., “John’s new office is GHC122”). Multi-Entity/Single Relation Answers to these questions can be multiple entities but involve a single relation, e.g., “Who is enrolled in the Math class?”, or expressed in lambda calculus notation: λx.TakingClass(x, Math) Unlike the previous category, answers to these questions can be sets of entities. Multi-Entity/Two Relations Answers to these questions can be multiple entities and involve two relations, e.g., “Who is enrolled in courses that I am teaching?”, or expressed in lambda calculus: λx.∃y.EnrolledInClass(x, y) ∧CourseTaughtByMe(y) Multi-Entity/Three Relations Answers to these questions can be multiple entities and involve three relations, e.g., “Which undergraduates are enrolled in courses that I am teaching?”, or expressed in lambda calculus notation: λx.∃y.EnrolledInClass(x, y) ∧CourseTaughtByMe(y) ∧Undergrad(x) In the data that we generate, answers to questions are always sets of spans in the narrative (the reason for this constraint is for easier evaluation of several existing machine-reading models; this assumption can easily be relaxed in the simulation). In all of our evaluations, we will partition our results by one of the four question categories listed above, which we hypothesize correlates with the difficulty of a question. 4 Methods We develop several baselines for our QA task, including a logistic regression model and four different neural network models: Seq2Seq (Bahdanau et al., 2015), MemN2N (Sukhbaatar et al., 2015), BiDAF (Seo et al., 2017), and DrQA (Chen et al., 2017). These models generate answers in different ways, e.g., predicting a single entity, predicting spans of text, or generating answer sequences. Therefore, we implement two experimental settings: ENTITY and RAW. In the ENTITY setting, given a question and a story, we treat all the entity spans in the story as candidate answers, and the prediction task becomes a classification problem. In the RAW setting, a model needs to predict the answer spans. For logistic regression and MemN2N, we adopt the ENTITY setting as they are naturally classification models. This ideally provides an upper bound on the performance when considering answer candidate generation. For all the other models, we can apply the RAW setting. 4.1 Logistic Regression The logistic regression baseline predicts the likelihood of an answer candidate being a true answer. 
838 For each answer candidate e and a given question, we extract the following features: (1) The frequency of e in the story; (2) The number of words within e; (3) Unigrams and bigrams within e; (4) Each non-stop question word combined with each non-stop word within e; (5) The average minimum distance between each non-stop question word and e in the story; (6) The common words (excluding stop words) between the question and the text surrounding of e (within a window of 10 words); (7) Sum of the frequencies of the common words to the left of e, to the right e, and both. These features are designed to help the model pick the correct answer spans. They have shown to be effective for answer prediction in previous work (Chen et al., 2016; Rajpurkar et al., 2016). We associate each answer candidate with a binary label indicating whether it is a true answer. We train a logistic regression classifier to produce a probability score for each answer candidate. During test, we search for an optimal threshold that maximizes the F1 performance on the validation data. During training, we optimize the cross-entropy loss using Adam (Kingma and Ba, 2014) with an initial learning rate of 0.01. We use a batch size of 10, 000 and train with 5 epochs. Training takes roughly 10 minutes for each domain on a Titan X GPU. 4.2 Seq2Seq The seq2seq model is based on the sequence to sequence model presented in (Bahdanau et al., 2015), which includes an attention model. Bahdanau et al. (Bahdanau et al., 2015) have used this model to build a neural based machine translation performing at the state-of-the-art. We adopt this model to fit our own domain by including a preprocessing step in which all statements are concatenated with a dedicated token, while eliminating all previously asked questions, and the current question is added at the end of the list of statements. The answers are treated as a sequence of words. We use word embeddings (Zou et al., 2013), as it was shown to improve accuracy. We use 3 GRU (Cho et al., 2014) connected layers, each with a capacity of 256. Our batch size was set to 16. We use gradient descent with an initial learning rate of 0.5 and a decay factor of 0.99, iterating on the data for 50, 000 steps (5 epochs). The training process for each domain took approximately 48 hours on a Titan X GPU. 4.3 MemN2N End-To-End Memory Network (MemN2N) is a neural architecture that encodes both long-term and short-term context into a memory and iteratively reads from the memory (i.e., multiple hops) relevant information to answer a question (Sukhbaatar et al., 2015). It has been shown to be effective for a variety of question answering tasks (Weston et al., 2015; Sukhbaatar et al., 2015; Hill et al., 2015). In this work, we directly apply MemN2N to our task with a small modification. Originally, MemN2N was designed to produce a single answer for a question, so at the prediction layer, it uses softmax to select the best answer from the answer candidates. In order to account for multiple answers for a given question, we modify the prediction layer to apply the logistic function and optimize the cross entropy loss instead. For training, we use the parameter setting as in a publicly available MemN2N 1 except that we set the embedding size to 300 instead of 20. We train the model for 100 epochs and it takes about 2 hours for each domain on a Titan X GPU. 
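A minimal numpy sketch of the prediction-layer change just described, assuming the memory network has already produced one real-valued score per answer candidate (function and variable names are ours, not the original implementation): the softmax over candidates is replaced with an element-wise logistic function and a cross-entropy loss, so that several candidates can be correct for one question.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multi_answer_loss(candidate_scores, gold_labels):
    """candidate_scores: (num_candidates,) scores from the memory network.
    gold_labels: (num_candidates,) binary vector with 1 for every true answer."""
    p = sigmoid(candidate_scores)
    eps = 1e-9
    # Binary cross-entropy summed over candidates, replacing the single softmax.
    return -np.sum(gold_labels * np.log(p + eps)
                   + (1.0 - gold_labels) * np.log(1.0 - p + eps))

scores = np.array([2.1, -0.3, 1.7, -2.0])
labels = np.array([1.0, 0.0, 1.0, 0.0])  # two correct answers for this question
print(multi_answer_loss(scores, labels))
```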
4.4 BiDAF-M BiDAF (Bidirectional Attention Flow Networks) (Seo et al., 2017) is one of the topperforming models on the span-based question answering dataset SQuAD (Rajpurkar et al., 2016). We reimplement BiDAF with simplified parameterizations and change the prediction layer so that it can predict multiple answer spans. Specifically, we encode the input story {x1, ..., xT } and a given question {q1, ..., qJ} at the character level and the word level, where the character level uses CNNs and the word level uses pre-trained word vectors. The concatenation of the character and word embeddings are passed to a bidirectional LSTM to produce a contextual embedding for each word in the story context and in the question. Then, we apply the same bidirectional attention flow layer to model the interactions between the context and question embeddings, producing question-aware feature vectors for each word in the context, denoted as G ∈Rdg×T . G is then fed into a bidirectional LSTM layer to obtain a feature matrix M1 ∈Rd1×T for predicting the start offset of the answer span, and M1 is then passed into 1https://github.com/domluna/memn2n 839 Within-World MEETING HOMEWORK SOFTWARE DEPARTMENT SHOPPING Avg. F1 Logistic Regression 50.1 55.7 60.9 55.9 61.1 56.7 Seq2Seq 22.5 32.6 16.7 39.1 31.5 28.5 MemN2N 55.4 46.6 69.5 67.3 46.3 57.0 BiDAF-M 81.8 76.9 68.4 68.2 68.7 72.8 DrQA-M 81.2 83.6 79.1 76.4 76.5 79.4 Cross-World MEETING HOMEWORK SOFTWARE DEPARTMENT SHOPPING Avg. F1 Logistic Regression 9.0 9.1 11.1 9.9 7.2 9.3 Seq2Seq 8.8 3.5 1.9 5.4 2.6 4.5 MemN2N 23.6 2.9 4.7 14.6 0.07 9.2 BiDAF-M 34.0 6.9 16.1 22.2 3.9 16.6 DrQA-M 46.5 12.2 23.1 28.5 9.3 23.9 Table 3: F1 scores for different baselines evaluated on both within-world and across-world settings. another bidirectional LSTM layer to obtain a feature matrix M2 ∈Rd2×T for predicting the end offset of the answer span. We then compute two probability scores for each word i in the narrative: pstart = sigmoid(wT 1 [G; M1]) and pend = sigmoid(wT 2 [G; M1; M2]), where w1 and w2 are trainable weights. The training objective is simply the sum of cross-entropy losses for predicting the start and end indices. We use 50 1D filters for CNN character embedding, each with a width of 5. The word embedding size is 300 and the hidden dimension for LSTMs is 128. For optimization, we use Adam (Kingma and Ba, 2014) with an initial learning rate of 0.001, and use a minibatch size of 32 for 15 epochs. The training process takes roughly 20 hours for each domain on a Titan X GPU. 4.5 DrQA-M DrQA (Chen et al., 2017) is an open-domain QA system that has demonstrated strong performance on multiple QA datasets. We modify the Document Reader component of DrQA and implement it in a similar framework as BiDAF-M for fair comparisons. First, we employ the same character-level and word-level encoding layers to both the input story and a given question. We then use the concatenation of the character and word embeddings as the final embeddings for words in the story and in the question. We compute the aligned question embedding (Chen et al., 2017) as a feature vector for each word in the story and concatenate it with the story word embedding and pass it into a bidirectional LSTM to obtain the contextual embeddings E ∈Rd×T for words in the story. Another bidirectional LSTM is used to obtain the contextual embeddings for the question, and selfattention is used to compress them into one single vector q ∈Rd. 
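The self-attention step just mentioned, which compresses the question's contextual embeddings into a single vector q, can be sketched as follows (a minimal numpy sketch with hypothetical parameter names, not the authors' code):

```python
import numpy as np

def self_attentive_pool(Q, w):
    """Q: (J, d) contextual embeddings of the J question tokens.
    w: (d,) learned attention weights.  Returns q in R^d."""
    scores = Q @ w                          # one scalar score per question token
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()             # softmax over question tokens
    return alpha @ Q                        # attention-weighted sum of embeddings

rng = np.random.default_rng(0)
Q = rng.normal(size=(8, 128))               # 8 question tokens, d = 128
w = rng.normal(size=128)
q = self_attentive_pool(Q, w)
print(q.shape)                              # (128,)
```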
The final prediction layer uses a bilinear term to compute scores for predicting the start offset: pstart = sigmoid(qT W1E) and another bilinear term for predicting the end offset: pend = sigmoid(qT W2E), where W1 and W2 are trainable weights. The training loss is the same as in BiDAF-M, and we use the same parameter setting. Training takes roughly 10 hours for each domain on a Titan X GPU. 5 Experiments We use two evaluation settings for measuring performance at this task: within-world and acrossworld. In the within-world evaluation setting, we test on the same world that the model was trained on. We then compute the precision, recall and F1 for each question and report the macro-average F1 score for questions in each world. In the acrossworld evaluation setting, the model is trained on four out of the five worlds, and tested on the remaining world. The across-world regime is obviously more challenging, as it requires the model to be able to learn to generalize to unseen relations and vocabulary. We consider the across-world evaluation setting to be the main evaluation criteria for any future models used on this dataset, as it mimics the practical requirement of any QA system used in personal assistants: it has to be able to answer questions on any new domain the user introduces to the system. 5.1 Results We draw several important observations from our results. First, we observe that more compositional questions (i.e., those that integrate multiple relations) are more challenging for most models - as 840 1 2 3 Average number of relations in query 0 10 20 30 40 50 60 70 80 F1 Within-world 1 2 3 Average number of relations in query 0 10 20 30 40 50 60 70 80 Cross-world DrQA-M Bidaf-M Logistic regression Memory networks Seq2seq Figure 3: F1 score breakdown based on the number of relations involved in the questions. all models (except Seq2seq) decrease in performance with the number of relations composed in a question (Figure 5.1). This can be in part explained by the fact that more composition questions are typically longer, and also require the model to integrate more sources of information in the narrative in order to answer them. One surprising observation from our results is that the performance on questions that ask about a single relation and have only a single answer is lower than questions that ask about a single relation but that can have multiple answers (see detailed results in the Appendix). This is in part because questions that can have multiple answers typically have canonical entities as answers (e.g., person’s name), and these entities generally repeat in the text, making it easier for the model to find the correct answer. Table 3 reports the overall (macro-average) F1 scores for different baselines. We can see that BiDAF-M and DrQA-M perform surprisingly well in the within-world evaluation even though they do not use any entity span information. In particular, DrQA-M outperforms BiDAF-M which suggests that modeling question-context interactions using simple bilinear terms have advantages over using more complex bidirectional attention flows. The lower performance of MemN2N suggests that its effectiveness on the BABI dataset does not directly transfer to our dataset. Note that the original MemN2N architecture uses simple bag-of-words and position encoding for sentences. This may work well on dataset with a simple vocabulary, for example, MemN2N performs the best in the SOFTWARE world as the SOFTWARE world has a smaller vocabulary compared to other worlds. 
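For reference, the F1 numbers in Table 3 and in the discussion here are computed per question over sets of gold and predicted answers and then macro-averaged. The sketch below is one plausible reading of that protocol, not the authors' released scoring script.

```python
def question_f1(predicted, gold):
    """Set-based F1 for a single question; answers are compared as strings."""
    predicted, gold = set(predicted), set(gold)
    if not predicted or not gold:
        return float(predicted == gold)
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def macro_f1(all_predictions, all_golds):
    scores = [question_f1(p, g) for p, g in zip(all_predictions, all_golds)]
    return sum(scores) / len(scores)

print(macro_f1([["John", "Mary"], ["GHC122"]],
               [["John", "Mary", "Ann"], ["GHC122"]]))  # 0.9 = mean of 0.8 and 1.0
```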
In general, we believe that better text representations for questions and narratives can lead to improved performance. Seq2Seq model also did not perform as well. This is due to the inherent difficulty of generation and encoding long sequences. We found that it performs better when training and testing on shorter stories (limited to 30 statements). Interestingly, the logistic regression baseline performs on a par with MemN2N, but there is still a large performance gap to BiDAF-M and DrQA-M, and the gap is greater for questions that compose multiple relations. In the across-world setting, the performance of all methods dramatically decreases.2 This suggests the limitations of these methods in generalizing to unseen relations and vocabulary. The span-based models BiDAF-M and DrQA-M have an advantage in this setting as they can learn to answer questions based on the alignment between the question and the narrative. However, the low performance still suggests their limitations in transferring question answering capabilities. 6 Conclusion In this work, we have taken the first steps towards the task of multi-relational question answering expressed through personal narrative. Our hypothesis is that this task will become increasingly important as users begin to teach personal knowledge about their world to the personal assistants embedded in their devices. This task naturally synthesizes two main branches of question answering research: QA over KBs and QA over free text. One of our main contributions is a collection of diverse datasets that feature rich compositional questions over a dynamic knowledge graph expressed through simulated narrative. Another contribution of our work is a thorough set of experiments and analysis of different types of endto-end architectures for QA at their ability to answer multi-relational questions of varying degrees of compositionality. Our long-term goal is that both the data and the simulation code we release will inspire and motivate the community to look towards the vision of letting end-users teach our personal assistants about the world around us. 2In order to allow generalization across different domains for the Seq2Seq model, we replace entities appearing in each story with an id that correlates to their appearance order. After the model outputs its prediction, the entity ids are converted back to the entity phrase. 841 The TEXTWORDSQA dataset and the code can be downloaded at https://igorlabutov. github.io/textworldsqa.github.io/ 7 Acknowledgments This paper was supported in part by Verizon InMind (Azaria and Hong, 2016). One of the GPUs used in this work was donated by Nvidia. References David Ahn, Valentin Jijkoun, Gilad Mishne, Karin Müller, Maarten de Rijke, Stefan Schlobach, M Voorhees, and L Buckland. 2004. Using wikipedia at the trec qa track. In TREC. Citeseer. Amos Azaria and Jason Hong. 2016. Recommender system with personality. In RecSys, pages 207–210. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Michele Banko, Eric Brill, Susan Dumais, and Jimmy Lin. 2002. Askmsr: Question answering using the worldwide web. In Proceedings of 2002 AAAI Spring Symposium on Mining Answers from Texts and Knowledge Bases, pages 7–9. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544. 
Danqi Chen, Jason Bolton, and Christopher D Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In ACL. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In ACL. Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP. Peter Clark. 2015. Elementary school science and math tests as a driver for ai: take the aristo challenge! In AAAI, pages 4019–4021. Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. 2017. Gated-attention readers for text comprehension. In ACL. Matthew Dunn, Levent Sagun, Mike Higgins, Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2017. A knowledge-grounded neural conversation model. In AAAI. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693– 1701. Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. Wikireading: A novel large-scale language understanding task over wikipedia. In ACL. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In ACL. Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. In ACL. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018a. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In NAACL. Daniel Khashabi, Tushar Khot Ashish Sabharwal, and Dan Roth. 2018b. Question answering as global reasoning over semantic abstractions. In AAAI. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2017. Answering complex questions using open information extraction. In ACL. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Tomáš Koˇcisk`y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2017. The narrativeqa reading comprehension challenge. arXiv preprint arXiv:1712.07040. Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In International Conference on Machine Learning, pages 1378–1387. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1545–1556. 842 Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. 
In EMNLP. Moontae Lee, Xiaodong He, Wen-tau Yih, Jianfeng Gao, Li Deng, and Paul Smolensky. 2015. Reasoning in vector space: An exploratory study of question answering. arXiv preprint arXiv:1511.06426. Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. arXiv preprint arXiv:1606.03126. Nick Montfort. 2005. Twisty Little Passages: an approach to interactive fiction. Mit Press. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268. Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. arXiv preprint arXiv:1608.05457. Baolin Peng, Zhengdong Lu, Hang Li, and Kam-Fai Wong. 2015. Towards neural network-based reasoning. arXiv preprint arXiv:1508.05508. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In EMNLP. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193–203. Mrinmaya Sachan, Kumar Dubey, Eric Xing, and Matthew Richardson. 2015. Learning answerentailing structures for machine comprehension. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 239–249. Mrinmaya Sachan and Eric Xing. 2016. Machine comprehension using rich semantic representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 486–492. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. Newsqa: A machine comprehension dataset. arXiv preprint arXiv:1611.09830. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869. Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698. Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. In ICLR. Scott Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In ACL. John M Zelle and Raymond J Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the national conference on artificial intelligence, pages 1050–1055. Luke S Zettlemoyer and Michael Collins. 2012. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. arXiv preprint arXiv:1207.1420. Will Y Zou, Richard Socher, Daniel Cer, and Christopher D Manning. 2013. 
Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1393–1398. 843 Dataset Questions Single Entity/Relation Multiple Entities Single Relation Two Relations Three Relations P R F1 P R F1 P R F1 P R F1 Logistic Regression MEETING 42.0 78.1 51.0 50.6 74.6 56.6 33.3 66.3 41.1 31.8 57.6 38.0 HOMEWORK 39.7 57.8 44.2 98.6 99.1 98.8 57.4 78.7 62.2 25.4 42.0 28.0 SOFTWARE 55.0 73.3 59.0 54.3 98.2 66.5 58.2 76.0 62.3 46.3 84.6 56.4 DEPARTMENT 42.6 65.9 48.0 59.0 82.5 65.1 38.8 52.7 41.2 42.5 64.6 46.9 SHOPPING 53.1 70.2 56.2 79.6 83.4 79.0 53.1 60.5 52.3 53.4 67.9 56.0 Average 46.5 69.1 51.7 68.4 87.6 73.2 48.2 66.8 51.8 39.9 63.3 45.1 Sequence-to-Sequence MEETING 27.9 18.3 22.1 48.1 12.1 19.3 42.1 15.0 22.1 33.7 19.7 24.8 HOMEWORK 16.3 9.0 11.6 71.9 9.3 16.4 75.3 35.9 48.6 32.9 15.6 21.1 SOFTWARE 42.5 21.5 28.5 44.8 8.5 14.2 50.0 6.3 11.2 45.5 7.4 12.7 DEPARTMENT 49.9 35.6 41.5 54.1 20.3 29.6 57.2 38.0 45.7 43.9 39.7 41.7 SHOPPING 25.8 16.0 19.8 71.3 28.2 40.5 33.3 19.3 24.4 46.9 31.4 37.6 Average 32.5 20.1 24.7 58.0 15.7 24.0 51.6 22.9 30.4 40.6 22.7 27.6 MemN2N MEETING 56.9 56.0 54.7 66.8 58.4 58.6 57.0 57.5 54.8 38.7 40.7 38.8 HOMEWORK 42.6 41.2 41.3 97.9 63.7 73.9 60.4 47.9 49.4 36.5 29.0 30.1 SOFTWARE 68.5 71.6 68.5 72.9 73.2 70.9 69.7 67.3 66.1 75.0 74.8 72.6 DEPARTMENT 56.3 74.3 61.3 78.5 87.0 80.2 59.4 76.6 63.2 57.8 74.2 61.6 SHOPPING 51.3 45.4 45.5 74.9 54.1 59.0 45.6 40.6 40.2 44.3 37.6 37.9 Average 55.1 57.7 54.3 78.2 67.3 68.5 58.4 58.0 54.8 50.4 51.3 48.2 BIDAF-M MEETING 87.6 92.4 88.2 78.6 86.1 79.2 68.9 89.6 74.6 73.9 94.4 80.0 HOMEWORK 79.9 97.4 84.5 86.8 81.0 82.4 76.4 90.0 78.9 47.0 78.5 55.5 SOFTWARE 48.0 89.4 57.4 68.5 93.6 75.8 62.4 86.1 67.5 62.7 90.9 71.3 DEPARTMENT 57.0 64.6 58.1 73.6 85.9 76.6 67.0 83.2 70.8 63.1 71.4 64.0 SHOPPING 60.5 87.1 66.9 76.7 90.9 79.8 57.1 89.0 65.8 53.2 88.5 62.0 Average 66.6 86.2 71.0 76.8 87.5 78.8 66.4 87.6 71.5 60.0 84.7 66.6 DrQA-M MEETING 77.1 94.2 81.0 80.6 95.8 85.1 68.6 95.7 76.8 64.1 97.9 74.3 HOMEWORK 88.8 97.9 91.4 85.2 80.2 81.4 85.0 94.7 87.9 51.6 85.8 60.2 SOFTWARE 72.7 96.0 78.9 78.6 93.3 82.7 79.4 89.4 80.9 66.3 93.2 74.5 DEPARTMENT 67.1 97.9 76.1 80.3 95.0 84.1 67.1 94.4 74.8 55.8 95.2 66.9 SHOPPING 71.5 93.9 77.7 86.4 94.8 88.7 62.8 91.1 71.4 62.4 90.7 69.7 Average 75.4 96.0 81.0 82.2 91.8 84.4 72.6 93.1 78.4 60.0 92.6 69.1 Table 4: Test performance at the task of question answering by question type using the within-world evaluation. 
Question type columns: (1) Single Entity/Relation; Across Entities: (2) Single Relation, (3) Two Relations, (4) Three Relations.

Model                  Dataset       (1)    (2)    (3)    (4)
Logistic Regression    MEETING       8.8   10.9    7.2    5.6
Logistic Regression    HOMEWORK      7.5   20.2    8.5    6.7
Logistic Regression    SOFTWARE      8.2   12.0   12.9   10.6
Logistic Regression    DEPARTMENT    7.4   14.4    9.7    6.1
Logistic Regression    SHOPPING      8.2    9.0    5.9    6.6
Logistic Regression    Average       8.0   13.3    8.8    7.1
Sequence-to-Sequence   MEETING       7.4    8.1   10.0   14.0
Sequence-to-Sequence   HOMEWORK      4.2    2.9    3.1    2.3
Sequence-to-Sequence   SOFTWARE      5.0    0.6    0.9    1.1
Sequence-to-Sequence   DEPARTMENT    5.5    4.0    5.6    5.6
Sequence-to-Sequence   SHOPPING      2.5    2.6    2.3    2.8
Sequence-to-Sequence   Average       4.9    3.6    4.4    5.2
MemN2N                 MEETING       9.0   34.2   33.0   27.4
MemN2N                 HOMEWORK      3.3   12.4    1.0    2.5
MemN2N                 SOFTWARE     13.4    0.8    3.2    2.9
MemN2N                 DEPARTMENT   12.9   20.8   13.0    9.4
MemN2N                 SHOPPING      0.1   0.07   0.05   0.03
MemN2N                 Average       7.8   13.7   10.1    8.4
BIDAF-M                MEETING      31.1   40.2   30.4   30.0
BIDAF-M                HOMEWORK     10.4   20.3    2.3    7.8
BIDAF-M                SOFTWARE     19.2   13.4   22.7    9.1
BIDAF-M                DEPARTMENT   23.3   30.5   19.0   13.5
BIDAF-M                SHOPPING      5.6    3.2    2.6    3.4
BIDAF-M                Average      17.9   21.5   15.4   12.8
DrQA-M                 MEETING      44.5   58.8   33.3   37.1
DrQA-M                 HOMEWORK     19.8   30.1    5.9    9.4
DrQA-M                 SOFTWARE     26.4   23.4   24.0   19.4
DrQA-M                 DEPARTMENT   31.0   38.8   24.4   15.7
DrQA-M                 SHOPPING     19.3    2.3    6.7    7.1
DrQA-M                 Average      28.2   30.7   18.9   17.7

Table 5: Test performance (F1 score) at the task of question answering by question type using the across-world evaluation.
2018
77
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 845–855 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 845 Simple and Effective Multi-Paragraph Reading Comprehension Christopher Clark∗ University of Washington [email protected] Matt Gardner Allen Institute for Artificial Intelligence [email protected] Abstract We introduce a method of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Most current question answering models cannot scale to document or multi-document input, and naively applying these models to each paragraph independently often results in them being distracted by irrelevant text. We show that it is possible to significantly improve performance by using a modified training scheme that teaches the model to ignore non-answer containing paragraphs. Our method involves sampling multiple paragraphs from each document, and using an objective function that requires the model to produce globally correct output. We additionally identify and improve upon a number of other design decisions that arise when working with document-level data. Experiments on TriviaQA and SQuAD shows our method advances the state of the art, including a 10 point gain on TriviaQA. 1 Introduction Teaching machines to answer arbitrary usergenerated questions is a long-term goal of natural language processing. For a wide range of questions, existing information retrieval methods are capable of locating documents that are likely to contain the answer. However, automatically extracting the answer from those texts remains an open challenge. The recent success of neural models at answering questions given a related paragraph (Wang et al., 2017c; Tan et al., 2017) suggests they have the potential to be a key part of ∗Work completed while interning at the Allen Institute for Artificial Intelligence a solution to this problem. Most neural models are unable to scale beyond short paragraphs, so typically this requires adapting a paragraph-level model to process document-level input. There are two basic approaches to this task. Pipelined approaches select a single paragraph from the input documents, which is then passed to the paragraph model to extract an answer (Joshi et al., 2017; Wang et al., 2017a). Confidence based methods apply the model to multiple paragraphs and return the answer with the highest confidence (Chen et al., 2017a). Confidence methods have the advantage of being robust to errors in the (usually less sophisticated) paragraph selection step, however they require a model that can produce accurate confidence scores for each paragraph. As we shall show, naively trained models often struggle to meet this requirement. In this paper we start by proposing an improved pipelined method which achieves state-of-the-art results. Then we introduce a method for training models to produce accurate per-paragraph confidence scores, and we show how combining this method with multiple paragraph selection further increases performance. Our pipelined method focuses on addressing the challenges that come with training on documentlevel data. We use a linear classifier to select which paragraphs to train and test on. Since annotating entire documents is expensive, data of this sort is typically distantly supervised, meaning only the answer text, not the answer spans, are known. 
To handle the noise this creates, we use a summed objective function that marginalizes the model’s output over all locations the answer text occurs. We apply this approach with a model design that integrates some recent ideas in reading comprehension models, including selfattention (Cheng et al., 2016) and bi-directional attention (Seo et al., 2016). 846 Our confidence method extends this approach to better handle the multi-paragraph setting. Previous approaches trained the model on questions paired with paragraphs that are known a priori to contain the answer. This has several downsides: the model is not trained to produce low confidence scores for paragraphs that do not contain an answer, and the training objective does not require confidence scores to be comparable between paragraphs. We resolve these problems by sampling paragraphs from the context documents, including paragraphs that do not contain an answer, to train on. We then use a shared-normalization objective where paragraphs are processed independently, but the probability of an answer candidate is marginalized over all paragraphs sampled from the same document. This requires the model to produce globally correct output even though each paragraph is processed independently. We evaluate our work on TriviaQA (Joshi et al., 2017) in the wiki, web, and unfiltered setting. Our model achieves a nearly 10 point lead over published prior work. We additionally perform an ablation study on our pipelined method, and we show the effectiveness of our multi-paragraph methods on a modified version of SQuAD (Rajpurkar et al., 2016) where only the correct document, not the correct paragraph, is known. Finally, we combine our model with a web search backend to build a demonstration end-to-end QA system1, and show it performs well on questions from the TREC question answering task (Voorhees et al., 1999). We release our code2 to facilitate future work. 2 Pipelined Method In this section we propose a pipelined QA system, where a single paragraph is selected and passed to a paragraph-level question answering model. 2.1 Paragraph Selection If there is a single source document, we select the paragraph with the smallest TF-IDF cosine distance with the question. Document frequencies are computed using the individual paragraphs within the document. If there are multiple input documents, we found it beneficial to use a linear classifier that uses the same TF-IDF score, whether the paragraph was the first in its document, how 1https://documentqa.allenai.org 2https://github.com/allenai/document-qa many tokens preceded it, and the number of question words it includes as features. The classifier is trained on the distantly supervised objective of selecting paragraphs that contain at least one answer span. On TriviaQA web, relative to truncating the document as done by prior work, this improves the chance of the selected text containing the correct answer from 83.1% to 85.1%. 2.2 Handling Noisy Labels Question: Which British general was killed at Khartoum in 1885? Answer: Gordon Context: In February 1885 Gordon returned to the Sudan to evacuate Egyptian forces. Khartoum came under siege the next month and rebels broke into the city, killing Gordon and the other defenders. The British public reacted to his death by acclaiming ‘Gordon of Khartoum’, a saint. However, historians have suggested that Gordon... 
Figure 1: Noisy supervision can cause many spans of text that contain the answer, but are not situated in a context that relates to the question (red), to distract the model from learning from more relevant spans (green). In a distantly supervised setup we label all text spans that match the answer text as being correct. This can lead to training the model to select unwanted answer spans. Figure 1 contains an example. To handle this difficulty, we use a summed objective function similar to the one from Kadlec et al. (2016), that optimizes the negative loglikelihood of selecting any correct answer span. The models we consider here work by independently predicting the start and end token of the answer span, so we take this approach for both predictions. For example, the objective for predicting the answer start token becomes −log P a∈A pa  where A is the set of tokens that start an answer and pi is the answer-start probability predicted by the model for token i. This objective has the advantage of being agnostic to how the model distributes probability mass across the possible answer spans, allowing the model to focus on only the most relevant spans. 2.3 Model We use a model with the following layers (shown in Figure 2): Embedding: We embed words using pretrained word vectors. We concatenate these with character-derived word embeddings, which are 847 Figure 2: High level outline of our model. produced by embedding characters using a learned embedding matrix and then applying a convolutional neural network and max-pooling. Pre-Process: A shared bi-directional GRU (Cho et al., 2014) is used to process the question and passage embeddings. Attention: The attention mechanism from the Bi-Directional Attention Flow (BiDAF) model (Seo et al., 2016) is used to build a queryaware context representation. Let hi and qj be the vector for context word i and question word j, and nq and nc be the lengths of the question and context respectively. We compute attention between context word i and question word j as: aij = w1 · hi + w2 · qj + w3 · (hi ⊙qj) where w1, w2, and w3 are learned vectors and ⊙ is element-wise multiplication. We then compute an attended vector ci for each context token as: pij = eaij Pnq j=1 eaij ci = nq X j=1 qjpij We also compute a query-to-context vector qc: mi = max 1≤j≤nq aij pi = emi Pnc i=1 emi qc = nc X i=1 hipi The final vector for each token is built by concatenating hi, ci, hi ⊙ci, and qc ⊙ci. In our model we subsequently pass the result through a linear layer with ReLU activations. Self-Attention: Next we use a layer of residual self-attention. The input is passed through another bi-directional GRU. Then we apply the same attention mechanism, only now between the passage and itself. In this case we do not use query-tocontext attention and we set aij = −inf if i = j. As before, we pass the concatenated output through a linear layer with ReLU activations. The result is then summed with the original input. Prediction: In the last layer of our model a bidirectional GRU is applied, followed by a linear layer to compute answer start scores for each token. The hidden states are concatenated with the input and fed into a second bi-directional GRU and linear layer to predict answer end scores. The softmax function is applied to the start and end scores to produce answer start and end probabilities. Dropout: We apply variational dropout (Gal and Ghahramani, 2016) to the input to all the GRUs and the input to the attention mechanisms at a rate of 0.2. 
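Because the attention equations above lose some structure in plain text, the following minimal numpy sketch restates the computation described in this section (parameter names are ours; this is an illustration, not the released code): the trilinear similarity a_ij = w1·h_i + w2·q_j + w3·(h_i ⊙ q_j), the context-to-query vectors c_i, the query-to-context vector qc, and the concatenated per-token output [h; c; h ⊙ c; qc ⊙ c].

```python
import numpy as np

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def bidaf_attention(H, Q, w1, w2, w3):
    """H: (n_c, d) context vectors; Q: (n_q, d) question vectors;
    w1, w2, w3: (d,) learned weight vectors."""
    # similarity a_ij = w1.h_i + w2.q_j + w3.(h_i * q_j), shape (n_c, n_q)
    a = (H @ w1)[:, None] + (Q @ w2)[None, :] + (H * w3[None, :]) @ Q.T
    # context-to-query: attended question vector c_i for every context token
    C = softmax(a, axis=1) @ Q                    # (n_c, d)
    # query-to-context: weights from the max similarity of each context token
    qc = softmax(a.max(axis=1), axis=0) @ H       # (d,)
    # final per-token representation [h; c; h * c; qc * c]
    return np.concatenate([H, C, H * C, qc[None, :] * C], axis=1)

rng = np.random.default_rng(0)
H, Q = rng.normal(size=(12, 64)), rng.normal(size=(6, 64))
w1, w2, w3 = (rng.normal(size=64) for _ in range(3))
print(bidaf_attention(H, Q, w1, w2, w3).shape)    # (12, 256)
```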
3 Confidence Method We adapt this model to the multi-paragraph setting by using the un-normalized and un-exponentiated (i.e., before the softmax operator is applied) score given to each span as a measure of the model’s confidence. For the boundary-based models we use here, a span’s score is the sum of the start and end score given to its start and end token. At test time we run the model on each paragraph and select the answer span with the highest confidence. This is the approach taken by Chen et al. (2017a). Our experiments in Section 5 show that these confidence scores can be very poor if the model is only trained on answer-containing paragraphs, as done by prior work. Table 1 contains some qualitative examples of the errors that occur. We hypothesize that there are two key sources of error. First, for models trained with the softmax objective, the pre-softmax scores for all spans can be arbitrarily increased or decreased by a constant value without changing the resulting softmax probability distribution. As a result, nothing prevents models from producing scores that are arbitrarily all larger or all smaller for one paragraph 848 Question Low Confidence Correct Extraction High Confidence Incorrect Extraction When is the Members Debate held? Immediately after Decision Time a “Members Debate” is held, which lasts for 45 minutes... ...majority of the Scottish electorate voted for it in a referendum to be held on 1 March 1979 that represented at least... How many tree species are in the rainforest? ...one 2001 study finding a quarter square kilometer (62 acres) of Ecuadorian rainforest supports more than 1,100 tree species The affected region was approximately 1,160,000 square miles (3,000,000 km2) of rainforest, compared to 734,000 square miles Who was Warsz? ....In actuality, Warsz was a 12th/13th century nobleman who owned a village located at the modern.... One of the most famous people born in Warsaw was Maria Sklodowska - Curie, who achieved international... How much did the initial LM weight in kg? The initial LM model weighed approximately 33,300 pounds (15,000 kg), and... The module was 11.42 feet (3.48 m) tall, and weighed approximately 12,250 pounds (5,560 kg) Table 1: Examples from SQuAD where a model was less confident in a correct extraction from one paragraph (left) than in an incorrect extraction from another (right). Even if the passage has no correct answer and does not contain any question words, the model assigns high confidence to phrases that match the category the question is asking about. Because the confidence scores are not well-calibrated, this confidence is often higher than the confidence assigned to correct answer spans in different paragraphs, even when those correct spans have better contextual evidence. than another. Second, if the model only sees paragraphs that contain answers, it might become too confident in heuristics or patterns that are only effective when it is known a priori that an answer exists. For example, the model might become too reliant on selecting answers that match semantic type the question is asking about, causing it be easily distracted by other entities of that type when they appear in irrelevant text. This kind of error has also been observed when distractor sentences are added to the context (Jia and Liang, 2017) We experiment with four approaches to training models to produce comparable confidence scores, shown in the following subsections. 
In all cases we will sample paragraphs that do not contain an answer as additional training points. 3.1 Shared-Normalization In this approach a modified objective function is used where span start and end scores are normalized across all paragraphs sampled from the same context. This means that paragraphs from the same context use a shared normalization factor in the final softmax operations. We train on this objective by including multiple paragraphs from the same context in each mini-batch. The key idea is that this will force the model to produce scores that are comparable between paragraphs, even though it does not have access to information about what other paragraphs are being considered. 3.2 Merge As an alternative to the previous method, we experiment with concatenating all paragraphs sampled from the same context together during training. A paragraph separator token with a learned embedding is added before each paragraph. 3.3 No-Answer Option We also experiment with allowing the model to select a special “no-answer” option for each paragraph. First we re-write our objective as: −log  esa Pn i=1 esi  −log egb Pn j=1 egj ! = −log esa+gb Pn i=1 Pn j=1 esi+gj ! where sj and gj are the scores for the start and end bounds produced by the model for token j, and a and b are the correct start and end tokens. We have the model compute another score, z, to represent the weight given to a “no-answer” possibility. Our revised objective function becomes: −log (1 −δ)ez + δesa+gb ez + Pn i=1 Pn j=1 esi+gj ! where δ is 1 if an answer exists and 0 otherwise. If there are multiple answer spans we use the same objective, except the numerator includes the summation over all answer start and end tokens. We compute z by adding an extra layer at the end of our model. We build input vectors by taking the summed hidden states of the RNNs used to predict the start/end token scores weighed by the start/end probabilities, and using a learned attention vector on the output of the self-attention layer. 849 These vectors are fed into a two layer network with an 80 dimensional hidden layer and ReLU activations that produces z as its only output. 3.4 Sigmoid As a final baseline, we consider training models with the sigmoid loss objective function. That is, we compute a start/end probability for each token by applying the sigmoid function to the start/end scores of each token. A cross entropy loss is used on each individual probability. The intuition is that, since the scores are being evaluated independently of one another, they are more likely to be comparable between different paragraphs. 4 Experimental Setup 4.1 Datasets We evaluate our approach on four datasets: TriviaQA unfiltered (Joshi et al., 2017), a dataset of questions from trivia databases paired with documents found by completing a web search of the questions; TriviaQA wiki, the same dataset but only including Wikipedia articles; TriviaQA web, a dataset derived from TriviaQA unfiltered by treating each question-document pair where the document contains the question answer as an individual training point; and SQuAD (Rajpurkar et al., 2016), a collection of Wikipedia articles and crowdsourced questions. 4.2 Preprocessing We note that for TriviaQA web we do not subsample as was done by Joshi et al. (2017), instead training on the all 530k training examples. We also observe that TriviaQA documents often contain many small paragraphs, so we restructure the documents by merging consecutive paragraphs together up to a target size. 
We use a maximum paragraph size of 400 unless stated otherwise. Paragraph separator tokens with learned embeddings are added between merged paragraphs to preserve formatting information. We are also careful to mark all spans of text that would be considered an exact match by the official evaluation script, which includes some minor text pre-processing, as answer spans, not just spans that are an exact string match with the answer text. 4.3 Sampling Our confidence-based approaches are trained by sampling paragraphs from the context during training. For SQuAD and TriviaQA web we take Model EM F1 baseline (Joshi et al., 2017) 41.08 47.40 BiDAF 50.21 56.86 BiDAF + TF-IDF 53.41 59.18 BiDAF + sum 56.22 61.48 BiDAF + TF-IDF + sum 57.20 62.44 our model + TF-IDF + sum 61.10 66.04 Table 2: Results on TriviaQA web using our pipelined method. the top four paragraphs as judged by our paragraph ranking function (see Section 2.1). We sample two different paragraphs from those four each epoch to train on. Since we observe that the higherranked paragraphs are more likely to contain the context needed to answer the question, we sample the highest ranked paragraph that contains an answer twice as often as the others. For the merge and shared-norm approaches, we additionally require that at least one of the paragraphs contains an answer span, and both of those paragraphs are included in the same mini-batch. For TriviaQA wiki we repeat the process but use the top 8 paragraphs, and for TriviaQA unfiltered we use the top 16, because much more context is given in these settings. 4.4 Implementation We train the model with the Adadelta optimizer (Zeiler, 2012) with a batch size 60 for TriviaQA and 45 for SQuAD. At test time we select the most probable answer span of length less than or equal to 8 for TriviaQA and 17 for SQuAD. The GloVe 300 dimensional word vectors released by Pennington et al. (2014) are used for word embeddings. On SQuAD, we use a dimensionality of size 100 for the GRUs and of size 200 for the linear layers employed after each attention mechanism. We found for TriviaQA, likely because there is more data, using a larger dimensionality of 140 for each GRU and 280 for the linear layers is beneficial. During training, we maintain an exponential moving average of the weights with a decay rate of 0.999. We use the weight averages at test time. We do not update the word vectors during training. 5 Results 5.1 TriviaQA Web and TriviaQA Wiki First, we do an ablation study on TriviaQA web to show the effects of our proposed methods for our pipeline model. We start with a baseline following the one used by Joshi et al. (2017). This 850 Model Web Web Verified Wiki Wiki Verified EM F1 EM F1 EM F1 EM F1 Baseline (Joshi et al., 2017) 40.74 47.06 49.54 55.80 40.32 45.91 44.86 50.71 Smarnet (Chen et al., 2017b) 40.87 47.09 51.11 55.98 42.41 48.84 50.51 55.90 Mnemonic Reader (Hu et al., 2017) 46.65 52.89 56.96 61.48 46.94 52.85 54.45 59.46 (Weissenborn et al., 2017a) 50.56 56.73 63.20 67.97 48.64 55.13 53.42 59.92 Neural Cascade (Swayamdipta et al., 2017) 53.75 58.57 63.20 66.88 51.59 55.95 58.90 62.53 S-Norm (ours) 66.37 71.32 79.97 83.70 63.99 68.93 67.98 72.88 Table 3: Published TriviaQA results. Our approach advances the state of the art by about 10 points on these datasets4 1 3 5 7 9 11 13 15 Number of Paragraphs 0.62 0.64 0.66 0.68 0.70 F1 Score TriviaQA Web F1 vs. 
Number of Paragraphs none sigmoid merge no-answer shared-norm Figure 3: Results on TriviaQA web when applying our models to multiple paragraphs from each document. Most of our training methods improve the model’s ability to utilize more text. system uses BiDAF (Seo et al., 2016) as the paragraph model, and selects a random answer span from each paragraph each epoch to train on. The first 400 tokens of each document are used during training, and the first 800 during testing. When using the TF-IDF paragraph selection approach, we instead break the documents into paragraphs of size 400 when training and 800 when testing, and select the top-ranked paragraph to feed into the model. As shown in Table 2, our baseline outperforms the results reported by Joshi et al. (2017) significantly, likely because we are not subsampling the data. We find both TF-IDF ranking and the sum objective to be effective. Using our refined model increases the gain by another 4 points. Next we show the results of our confidencebased approaches. For this comparison we split documents into paragraphs of at most 400 tokens, and rank them using TF-IDF cosine distance. Then we measure the performance of our proposed approaches as the model is used to independently process an increasing number of these paragraphs, and the highest confidence answer is selected as the final output. The results are shown in Figure 3. On this dataset even the model trained without any of the proposed training methods (“none”) im0 5 10 15 20 25 30 Number of Paragraphs 0.56 0.58 0.60 0.62 0.64 0.66 F1 Score Unfiltered TriviaQA F1 vs. Number of Paragraphs none sigmoid merge no-answer shared-norm Figure 4: Results for our confidence methods on TriviaQA unfiltered. The shared-norm approach is the strongest, while the baseline model starts to lose performance as more paragraphs are used. proves as more paragraphs are used, showing it does a passable job at focusing on the correct paragraph. The no-answer option training approach lead to a significant improvement, and the sharednorm and merge approaches are even better. We use the shared-norm approach for evaluation on the TriviaQA test sets. We found that increasing the paragraph size to 800 at test time, and to 600 during training, was slightly beneficial, allowing our model to reach 66.04 EM and 70.98 F1 on the dev set. As shown in Table 3, our model is firmly ahead of prior work on both the TriviaQA web and TriviaQA wiki test sets. Since our submission, a few additional entries have been added to the public leader for this dataset5, although to the best of our knowledge these results have not yet been published. 5.2 TriviaQA Unfiltered Next we apply our confidence methods to TriviaQA unfiltered. This dataset is of particular interest because the system is not told which document contains the answer, so it provides a plausible simulation of answering a question using a document 4Comparison made of 5/01/2018. 5https://competitions.codalab.org/competitions/17208 851 1 3 5 7 9 11 13 15 Number of Paragraphs 0.550 0.575 0.600 0.625 0.650 0.675 0.700 0.725 F1 Score SQuAD F1 vs. Number of Paragraphs none sigmoid merge no-answer shared-norm Figure 5: Results for our confidence methods on document-level SQuAD. The shared-norm model is the only model that does not lose performance when exposed to large numbers of paragraphs. retrieval system. We show the same graph as before for this dataset in Figure 4. 
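The inference used to produce these curves can be sketched as follows (a minimal sketch with hypothetical inputs): the model is run on each paragraph independently, every span up to a maximum length is scored as its pre-softmax start score plus end score, and the highest-scoring span across all paragraphs is returned.

```python
import numpy as np

def best_span(start_scores, end_scores, max_len):
    """Return (score, start, end) for the best span of length <= max_len."""
    best = (-np.inf, 0, 0)
    for i in range(len(start_scores)):
        for j in range(i, min(i + max_len, len(start_scores))):
            score = start_scores[i] + end_scores[j]
            if score > best[0]:
                best = (score, i, j)
    return best

def answer_document(paragraphs, max_len=8):
    """paragraphs: list of (start_scores, end_scores, tokens) per paragraph."""
    candidates = []
    for start_scores, end_scores, tokens in paragraphs:
        score, i, j = best_span(start_scores, end_scores, max_len)
        candidates.append((score, " ".join(tokens[i:j + 1])))
    return max(candidates)[1]   # highest-confidence span across all paragraphs

p1 = (np.array([0.1, 2.0, 0.3]), np.array([0.0, 1.0, 2.5]),
      ["General", "Gordon", "died"])
p2 = (np.array([1.0, 0.2]), np.array([0.5, 0.1]), ["irrelevant", "text"])
print(answer_document([p1, p2]))  # "Gordon died" (score 2.0 + 2.5)
```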
Our methods have an even larger impact on this dataset, probably because there are many more relevant and irrelevant paragraphs for each question, making paragraph selection more important. Note the naively trained model starts to lose performance as more paragraphs are used, showing that errors are being caused by the model being overly confident in incorrect extractions. We achieve a score of 61.55 EM and 67.61 F1 on the dev set. This advances the only prior result reported for this dataset, 50.6 EM and 57.3 F1 from Wang et al. (2017b), by 10 points. 5.3 SQuAD We additionally evaluate our model on SQuAD. SQuAD questions were not built to be answered independently of their context paragraph, which makes it unclear how effective of an evaluation tool they can be for document-level question answering. To assess this we manually label 500 random questions from the training set. We categorize questions as: 1. Context-independent, meaning it can be understood independently of the paragraph. 2. Document-dependent, meaning it can be understood given the article’s title. For example, “What individual is the school named after?” for the document “Harvard University”. 3. Paragraph-dependent, meaning it can only be understood given its paragraph. For example, “What was the first step in the reforms?”. We find 67.4% of the questions to be contextindependent, 22.6% to be document-dependent, and the remaining 10% to be paragraphdependent. There are many document-dependent questions because questions are frequently about the subject of the document. Since a reasonably high fraction of the questions can be understood given the document they are from, and to isolate our analysis from the retrieval mechanism used, we choose to evaluate on the document-level. We build documents by concatenating all the paragraphs in SQuAD from the same article together into a single document. Given the correct paragraph (i.e., in the standard SQuAD setting) our model reaches 72.14 EM and 81.05 F1 and can complete 26 epochs of training in less than five hours. Most of our variations to handle the multi-paragraph setting caused a minor (up to half a point) drop in performance, while the sigmoid version fell behind by a point and a half. We graph the document-level performance in Figure 5. For SQuAD, we find it crucial to employ one of the suggested confidence training techniques. The base model starts to drop in performance once more than two paragraphs are used. However, the shared-norm approach is able to reach a peak performance of 72.37 F1 and 64.08 EM given 15 paragraphs. Given our estimate that 10% of the questions are ambiguous if the paragraph is unknown, our approach appears to have adapted to the document-level task very well. Finally, we compare the shared-norm model with the document-level result reported by Chen et al. (2017a). We re-evaluate our model using the documents used by Chen et al. (2017a), which consist of the same Wikipedia articles SQuAD was built from, but downloaded at different dates. The advantage of this dataset is that it does not allow the model to know a priori which paragraphs were filtered out during the construction of SQuAD. The disadvantage is that some of the articles have been edited since the questions were written, so some questions may no longer be answerable. Our model achieves 59.14 EM and 67.34 F1 on this dataset, which significantly outperforms the 49.7 EM reported by Chen et al. (2017a). 
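The shared-normalization objective that drives these results can be summarized in code (a minimal numpy sketch of Section 3.1 that also folds in the summed objective of Section 2.2; not the released implementation): start scores from every paragraph sampled for the same question share one softmax normalizer, and the loss marginalizes over all answer start positions. End scores are handled identically.

```python
import numpy as np

def shared_norm_start_loss(paragraph_logits, paragraph_answer_starts):
    """paragraph_logits: list of (n_tokens_i,) start-score arrays, one per
    paragraph sampled from the same question's context.
    paragraph_answer_starts: list of index lists; paragraphs without an
    answer contribute an empty list."""
    all_logits = np.concatenate(paragraph_logits)
    z = np.exp(all_logits - all_logits.max())
    p = z / z.sum()                       # one softmax across *all* paragraphs
    correct_mass, offset = 0.0, 0
    for logits, starts in zip(paragraph_logits, paragraph_answer_starts):
        if starts:
            correct_mass += p[[offset + s for s in starts]].sum()
        offset += len(logits)
    return -np.log(correct_mass)

loss = shared_norm_start_loss(
    [np.array([0.2, 3.0, -1.0]), np.array([1.5, 0.1])],  # two paragraphs
    [[1], []])                                           # answer only in the first
print(loss)
```

Because one unit of probability mass is spread over every paragraph of the question, span scores remain comparable at test time even though each paragraph is processed independently.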
5.4 Curated TREC We perform one final experiment that tests our model as part of an end-to-end question answering system. For document retrieval, we re-implement the pipeline from Joshi et al. (2017). Given a question, we retrieve up to 10 web documents us7https://github.com/brmson/yodaqa/wiki/Benchmarks 852 Model Accuracy S-Norm (ours) 53.31 YodaQA with Bing (Baudiˇs, 2015), 37.18 YodaQA (Baudiˇs, 2015), 34.26 DrQA + DS (Chen et al., 2017a) 25.7 Table 4: Results on the Curated TREC corpus, YodaQA results extracted from its github page7 ing a Bing web search of the question, and all Wikipedia articles about entities the entity linker TAGME (Ferragina and Scaiella, 2010) identifies in the question. We then use our linear paragraph ranker to select the 16 most relevant paragraphs from all these documents, which are passed to our model to locate the final answer span. We choose to use the shared-norm model trained on the TriviaQA unfiltered dataset since it is trained using multiple web documents as input. We use the same heuristics as Joshi et al. (2017) to filter out trivia or QA websites to ensure questions cannot be trivially answered using webpages that directly address the question. A demo of the system is publicly available8. We find accuracy on the TriviaQA unfiltered questions remains almost unchanged (within half a percent exact match score) when using our document retrieval method instead of the given documents, showing our pipeline does a good job of producing evidence documents that are similar to the ones in the training data. We test the system on questions from the TREC QA tasks (Voorhees et al., 1999), in particular a curated set of questions from Baudiˇs (2015), the same dataset used in Chen et al. (2017a). We apply our system to the 694 test questions without retraining on the train questions. We compare against DrQA (Chen et al., 2017a) and YodaQA (Baudiˇs, 2015). It is important to note that these systems use different document corpora (Wikipedia for DrQA, and Wikipedia, several knowledge bases, and optionally Bing web search for YodaQA) and different training data (SQuAD and the TREC training questions for DrQA, and TREC only for YodaQA), so we cannot make assertions about the relative performance of individual components. Nevertheless, it is instructive to show how the methods we experiment with in this work can advance an end-to-end QA system. The results are listed in Table 4. Our method outperforms prior work, breaking the 50% accu8https://documentqa.allenai.org/ Category proportion Sentence reading errors 35.2 Paragraph reading errors 17.6 Document coreference errors 14.1 Part of answer extracted 7.1 Required background knowledge 5.8 Answer indirectly stated 20.2 Table 5: Error analysis on TriviaQA web. racy mark. This is a strong proof-of-concept that neural paragraph reading combined with existing document retrieval methods can advance the stateof-the-art on general question answering. It also shows that, despite the noise, the data from TriviaQA is sufficient to train models that can be effective on out-of-domain QA tasks. 5.5 Discussion We found that models that have only been trained on answer-containing paragraphs can perform very poorly in the multi-paragraph setting. The results were particularly bad for SQuAD; we think this is partly because the paragraphs are shorter, so the model had less exposure to irrelevant text. 
The shared-norm approach consistently outperformed the other methods, especially on SQuAD and TriviaQA unfiltered, where many paragraphs were needed to reach peak performance. Figures 3, 4, and 5 show this technique has a minimal effect on the performance when only one paragraph is used, suggesting the model’s per-paragraph performance is preserved. Meanwhile, it can be seen the accuracy of the shared-norm model never drops as more paragraphs are added, showing it successfully resolves the problem of being distracted by irrelevant text. The no-answer and merge approaches were moderately effective, we suspect because they at least expose the model to more irrelevant text. However, these methods do not address the fundamental issue of requiring confidence scores to be comparable between independent applications of the model to different paragraphs, which is why we think they lagged behind. The sigmoid objective function reduces the paragraph-level performance considerably, especially on the TriviaQA datasets. We suspect this is because it is vulnerable to label noise, as discussed in Section 2.2. 5.6 Error Analysis We perform an error analysis by labeling 200 random TriviaQA web dev-set errors made by the shared-norm model. We found 40.5% of the er853 rors were caused because the document did not contain sufficient evidence to answer the question, and 17% were caused by the correct answer not being contained in the answer key. The distribution of the remaining errors is shown in Table 5. We found quite a few cases where a sentence contained the answer, but the model was unable to extract it due to complex syntactic structure or paraphrasing. Two kinds of multi-sentence reading errors were also common: cases that required connecting multiple statements made in a single paragraph, and long-range coreference cases where a sentence’s subject was named in a previous paragraph. Finally, some questions required background knowledge, or required the model to extract answers that were only stated indirectly (e.g., examining a list to extract the nth element). Overall, these results suggest good avenues for improvement are to continue advancing the sentence and paragraph level reading comprehension abilities of the model, and adding a mechanism to handle document-level coreferences. 6 Related Work Reading Comprehension Datasets. The state of the art in reading comprehension has been rapidly advanced by neural models, in no small part due to the introduction of many large datasets. The first large scale datasets for training neural reading comprehension models used a Cloze-style task, where systems must predict a held out word from a piece of text (Hermann et al., 2015; Hill et al., 2015). Additional datasets including SQuAD (Rajpurkar et al., 2016), WikiReading (Hewlett et al., 2016), MS Marco (Nguyen et al., 2016) and TriviaQA (Joshi et al., 2017) provided more realistic questions. Another dataset of trivia questions, Quasar-T (Dhingra et al., 2017), was introduced recently that uses ClueWeb09 (Callan et al., 2009) as its source for documents. In this work we choose to focus on SQuAD because it is well studied, and TriviaQA because it is more challenging and features documents and multi-document contexts (Quasar T is similar, but was released after we started work on this project). Neural Reading Comprehension. Neural reading comprehension systems typically use some form of attention (Wang and Jiang, 2016), although alternative architectures exist (Chen et al., 2017a; Weissenborn et al., 2017b). 
Our model follows this approach, but includes some recent advances such as variational dropout (Gal and Ghahramani, 2016) and bi-directional attention (Seo et al., 2016). Self-attention has been used in several prior works (Cheng et al., 2016; Wang et al., 2017c; Pan et al., 2017). Our approach to allowing a reading comprehension model to produce a per-paragraph no-answer score is related to the approach used in the BiDAFT (Min et al., 2017) model to produce per-sentence classification scores, although we use an attentionbased method instead of max-pooling. Open QA. Open question answering has been the subject of much research, especially spurred by the TREC question answering track (Voorhees et al., 1999). Knowledge bases can be used, such as in (Berant et al., 2013), although the resulting systems are limited by the quality of the knowledge base. Systems that try to answer questions using natural language resources such as YodaQA (Baudiˇs, 2015) typically use pipelined methods to retrieve related text, build answer candidates, and pick a final output. Neural Open QA. Open question answering with neural models was considered by Chen et al. (2017a), where researchers trained a model on SQuAD and combined it with a retrieval engine for Wikipedia articles. Our work differs because we focus on explicitly addressing the problem of applying the model to multiple paragraphs. A pipelined approach to QA was recently proposed by Wang et al. (2017a), where a ranker model is used to select a paragraph for the reading comprehension model to process. More recent work has considered evidence aggregation techniques (Wang et al., 2017b; Swayamdipta et al., 2017). Our work shows paragraph-level models that produce well-calibrated confidence scores can effectively exploit large amounts of text without aggregation, although integrating aggregation techniques could further improve our results. 7 Conclusion We have shown that, when using a paragraph-level QA model across multiple paragraphs, our training method of sampling non-answer-containing paragraphs while using a shared-norm objective function can be very beneficial. Combining this with our suggestions for paragraph selection, using the summed training objective, and our model design allows us to advance the state of the art on TriviaQA. As shown by our demo, this work can be directly applied to building deep-learningpowered open question answering systems. 854 References Petr Baudiˇs. 2015. YodaQA: A Modular Question Answering System Pipeline. In POSTER 2015-19th International Student Conference on Electrical Engineering. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic Parsing on Freebase from Question-Answer Pairs. In EMNLP. Jamie Callan, Mark Hoy, Changkuk Yoo, and Le Zhao. 2009. Clueweb09 Data Set. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017a. Reading Wikipedia to Answer Open-Domain Questions. arXiv preprint arXiv:1704.00051. Zheqian Chen, Rongqin Yang, Bin Cao, Zhou Zhao, Deng Cai, and Xiaofei He. 2017b. Smarnet: Teaching Machines to Read and Comprehend Like Human. arXiv preprint arXiv:1710.02772. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long Short-Term Memory-Networks for Machine Reading. arXiv preprint arXiv:1601.06733. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations Using RNN EncoderDecoder for Statistical Machine Translation. arXiv preprint arXiv:1406.1078. 
Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. 2017. Quasar: Datasets for Question Answering by Search and Reading. arXiv preprint arXiv:1707.03904. Paolo Ferragina and Ugo Scaiella. 2010. TAGME: On-the-fly Annotation of Short Text Fragments (by Wikipedia Entities). In Proceedings of the 19th ACM international conference on Information and knowledge management. Yarin Gal and Zoubin Ghahramani. 2016. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. In Advances in neural information processing systems. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching Machines to Read and Comprehend. In Advances in Neural Information Processing Systems. Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. Wikireading: A Novel Large-scale Language Understanding Task over Wikipedia. arXiv preprint arXiv:1608.03542. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The Goldilocks Principle: Reading Children’s Books with Explicit Memory Representations. arXiv preprint arXiv:1511.02301. Minghao Hu, Yuxing Peng, and Xipeng Qiu. 2017. Mnemonic Reader: Machine Comprehension with Iterative Aligning and Multi-hop Answer Pointing. Robin Jia and Percy Liang. 2017. Adversarial Examples for Evaluating Reading Comprehension Systems. arXiv preprint arXiv:1707.07328. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. arXiv preprint arXiv:1705.03551. Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text Understanding with the Attention Sum Reader Network. arXiv preprint arXiv:1603.01547. Sewon Min, Minjoon Seo, and Hannaneh Hajishirzi. 2017. Question Answering through Transfer Learning from Large Fine-grained Supervision Data. arXiv preprint arXiv:1702.02171. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. arXiv preprint arXiv:1611.09268. Boyuan Pan, Hao Li, Zhou Zhao, Bin Cao, Deng Cai, and Xiaofei He. 2017. MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension. arXiv preprint arXiv:1707.09098. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP). Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. arXiv preprint arXiv:1606.05250. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional Attention Flow for Machine Comprehension. CoRR, abs/1611.01603. Swabha Swayamdipta, Ankur P. Parikh, and Tom Kwiatkowski. 2017. Multi-Mention Learning for Reading Comprehension with Neural Cascades. Chuanqi Tan, Furu Wei, Nan Yang, Weifeng Lv, and Ming Zhou. 2017. S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension. arXiv preprint arXiv:1706.04815. Ellen M Voorhees et al. 1999. The TREC-8 Question Answering Track Report. In Trec. Shuohang Wang and Jing Jiang. 2016. Machine Comprehension Using Match-LSTM and Answer Pointer. arXiv preprint arXiv:1608.07905. 855 Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 
2017a. R3: Reinforced Reader-Ranker for Open-Domain Question Answering. arXiv preprint arXiv:1709.00023. Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. 2017b. Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017c. Gated Self-Matching Networks for Reading Comprehension and Question Answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1. Dirk Weissenborn, Tomas Kocisky, and Chris Dyer. 2017a. Dynamic Integration of Background Knowledge in Neural NLU Systems. arXiv preprint arXiv:1706.02596. Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017b. FastQA: A Simple and Efficient Neural Architecture for Question Answering. arXiv preprint arXiv:1703.04816. Matthew D Zeiler. 2012. ADADELTA: an Adaptive Learning Rate Method. arXiv preprint arXiv:1212.5701.
2018
78
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 856–865 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 856 Semantically Equivalent Adversarial Rules for Debugging NLP Models Marco Tulio Ribeiro University of Washington [email protected] Sameer Singh University of California, Irvine [email protected] Carlos Guestrin University of Washington [email protected] Abstract Complex machine learning models for NLP are often brittle, making different predictions for input instances that are extremely similar semantically. To automatically detect this behavior for individual instances, we present semantically equivalent adversaries (SEAs) – semantic-preserving perturbations that induce changes in the model’s predictions. We generalize these adversaries into semantically equivalent adversarial rules (SEARs) – simple, universal replacement rules that induce adversaries on many instances. We demonstrate the usefulness and flexibility of SEAs and SEARs by detecting bugs in black-box state-of-the-art models for three domains: machine comprehension, visual questionanswering, and sentiment analysis. Via user studies, we demonstrate that we generate high-quality local adversaries for more instances than humans, and that SEARs induce four times as many mistakes as the bugs discovered by human experts. SEARs are also actionable: retraining models using data augmentation significantly reduces bugs, while maintaining accuracy. 1 Introduction With increasing complexity of models for tasks like classification (Joulin et al., 2016), machine comprehension (Rajpurkar et al., 2016; Seo et al., 2017), and visual question answering (Zhu et al., 2016), models are becoming increasingly challenging to debug, and to determine whether they are ready for deployment. In particular, these complex models are prone to brittleness: different ways of phrasing the same sentence can often cause the model to In the United States especially, several high-profile cases such as Debra LaFave, Pamela Rogers, and Mary Kay Letourneau have caused increased scrutiny on teacher misconduct. (a) Input Paragraph Q: What has been the result of this publicity? A: increased scrutiny on teacher misconduct (b) Original Question and Answer Q: What haL been the result of this publicity? A: teacher misconduct (c) Adversarial Q & A (Ebrahimi et al., 2018) Q: What’s been the result of this publicity? A: teacher misconduct (d) Semantically Equivalent Adversary Figure 1: Adversarial examples for question answering, where the model predicts the correct answer for the question and input paragraph (1a and 1b). It is possible to fool the model by adversarially changing a single character (1c), but at the cost of making the question nonsensical. A Semantically Equivalent Adversary (1d) results in an incorrect answer while preserving semantics. output different predictions. While held-out accuracy is often useful, it is not sufficient: practitioners consistently overestimate their model’s generalization (Patel et al., 2008) since test data is usually gathered in the same manner as training and validation. When deployed, these seemingly accurate models encounter sentences that are written very differently than the ones in the training data, thus making them prone to mistakes, and fragile with respect to distracting additions (Jia and Liang, 2017). 
These problems are exacerbated by the variability in language, and by cost and noise in annotations, making such bugs challenging to detect and fix. A particularly challenging issue is oversensitivity (Jia and Liang, 2017): a class of bugs where models output different predictions for very similar inputs. These bugs are prevalent in image classifi857 Transformation Rules #Flips (WP is→WP’s) 70 (1%) (?→??) 202(3%) (a) Example Rules Original: What is the oncorhynchus also called? A: chum salmon Changed: What’s the oncorhynchus also called? A: keta (b) Example for (WP is→WP’s) Original: How long is the Rhine? A: 1,230 km Changed: How long is the Rhine?? A: more than 1,050,000 (c) Example for (?→??) Figure 2: Semantically Equivalent Adversarial Rules: For the task of question answering, the proposed approach identifies transformation rules for questions in (a) that result in paraphrases of the queries, but lead to incorrect answers (#Flips is the number of times this happens in the validation data). We show examples of rephrased questions that result in incorrect answers for the two rules in (b) and (c). cation (Szegedy et al., 2014), a domain where one can measure the magnitude of perturbations, and many small-magnitude changes are imperceptible to the human eye. For text, however, a single word addition can change semantics (e.g. adding “not”), or have no semantic impact for the task at hand. Inspired by adversarial examples for images, we introduce semantically equivalent adversaries (SEAs) – text inputs that are perturbed in semantics-preserving ways, but induce changes in a black box model’s predictions (example in Figure 1). Producing such adversarial examples systematically can significantly aid in debugging ML models, as it allows users to detect problems that happen in the real world, instead of oversensitivity only to malicious attacks such as intentionally scrambling, misspelling, or removing words (Bansal et al., 2014; Ebrahimi et al., 2018; Li et al., 2016). While SEAs describe local brittleness (i.e. are specific to particular predictions), we are also interested in bugs that affect the model more globally. We represent these via simple replacement rules that induce SEAs on multiple predictions, such as in Figure 2, where a simple contraction of “is”after Wh pronouns (what, who, whom) (2b) makes 70 (1%) of the previously correct predictions of the model “flip” (i.e. become incorrect). Perhaps more surprisingly, adding a simple “?” induces mistakes in 3% of examples. We call such rules semantically equivalent adversarial rules (SEARs). In this paper, we present SEAs and SEARs, designed to unveil local and global oversensitivity bugs in NLP models. We first present an approach to generate semantically equivalent adversaries, based on paraphrase generation techniques (Lapata et al., 2017), that is model-agnostic (i.e. works for any black box model). Next, we generalize SEAs into semantically equivalent rules, and outline the properties for optimal rule sets: semantic equivalence, high adversary count, and non-redundancy. We frame the problem of finding such a set as a submodular optimization problem, leading to an accurate yet efficient algorithm. Including the human into the loop, we demonstrate via user studies that SEARs help users uncover important bugs on a variety of state-of-the-art models for different tasks (sentiment classification, visual question answering). 
Our experiments indicate that SEAs and SEARs make humans significantly better at detecting impactful bugs – SEARs uncover bugs that cause 3 to 4 times more mistakes than human-generated rules, in much less time. Finally, we show that SEARs are actionable, enabling the human to close the loop by fixing the discovered bugs using a data augmentation procedure. 2 Semantically Equivalent Adversaries Consider a black box model f that takes a sentence x and makes a prediction f(x), which we want to debug. We identify adversaries by generating paraphrases of x, and getting predictions from f until the original prediction is changed. Given an indicator function SemEq(x, x′) that is 1 if x is semantically equivalent to x′ and 0 otherwise, we define a semantically equivalent adversary (SEA) as a semantically equivalent instance that changes the model prediction in Eq (1). Such adversaries are important in evaluating the robustness of f, as each is an undesirable bug. SEA(x, x′)=1  SemEq(x, x′)∧f(x)̸=f(x′)  (1) While there are various ways of scoring semantic similarity between pairs of texts based on embeddings (Le and Mikolov, 2014; Wieting and Gimpel, 2017), they do not explicitly penalize unnatural sentences, and generating sentences requires surrounding context (Le and Mikolov, 2014) or training a separate model. We turn instead to paraphrasing based on neural machine translation (Lapata et al., 2017), where P(x′|x) (the probability of a paraphrase x′ given original sentence x) is proportional to translating x into multiple pivot languages 858 and then taking the score of back-translating the translations into the original language. This approach scores semantics and “plausibility” simultaneously (as translation models have “built in” language models) and allows for easy paraphrase generation, by linearly combining the paths of each back-decoder when back-translating. Unfortunately, given source sentences x and z, P(x′|x) is not comparable to P(z′|z), as each has a different normalization constant, and heavily depends on the shape of the distribution around x or z. If there are multiple perfect paraphrases near x, they will all share probability mass, while if there is a paraphrase much better than the rest near z, it will have a higher score than the ones near x, even if the paraphrase quality is the same. We thus define the semantic score S(x, x′) as a ratio between the probability of a paraphrase and the probability of the sentence itself: S(x, x′) = min  1, P(x′|x) P(x|x)  (2) We define SemEq(x, x′) = 1[S(x, x′) ≥τ], i.e. x′ is semantically equivalent to x if the similarity score between x and x′ is greater than some threshold τ (which we crowdsource in Section 5). In order to generate adversaries, we generate a set of paraphrases Πx around x via beam search and get predictions on Πx using the black box model until an adversary is found, or until S(x, x′) < τ. We may be interested in the best adversary for a particular instance, i.e. argmaxx′∈Πx S(x, x′)SEAx(x′), or we may consider multiple SEAs for generalization purposes. We illustrate this process in Figure 3, where we generate SEAs for a VQA model by generating paraphrases around the question, and checking when the model prediction changes. The first two adversaries with highest S(x, x′) are semantically equivalent, the third maintains the semantics enough for it to be a useful adversary, and the fourth is ungrammatical and thus not useful. 
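A minimal sketch of this adversary search (the paraphrase candidates and their probabilities are assumed to come from the back-translation model described above; the function names are illustrative, not the authors' code):

import math

def semantic_score(logp_paraphrase_given_x, logp_x_given_x):
    # S(x, x') = min(1, P(x'|x) / P(x|x)), computed from log-probabilities.
    return min(1.0, math.exp(logp_paraphrase_given_x - logp_x_given_x))

def find_sea(x, scored_paraphrases, predict, tau):
    # scored_paraphrases: (paraphrase, S(x, paraphrase)) pairs sorted by
    # decreasing semantic score; predict: the black-box model f; tau: the
    # semantic-equivalence threshold. Returns the first (best-scoring)
    # paraphrase that flips the prediction, or None.
    original = predict(x)
    for x_prime, score in scored_paraphrases:
        if score < tau:
            break  # remaining candidates are not semantically equivalent
        if predict(x_prime) != original:
            return x_prime
    return None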
3 Semantically Equivalent Adversarial Rules (SEARs) While finding the best adversary for a particular instance is useful, humans may not have time or patience to examine too many SEAs, and may not be able to generalize well from them in order to understand and fix the most impactful bugs. In this section, we address the problem of generalizing local adversaries into Semantically Equivalent What color is the tray? Pink What colour is the tray? Green Which color is the tray? Green What color is it? Green How color is tray? Green Figure 3: Visual QA Adversaries: Paraphrasing questions to find adversaries for the original question (top, in bold) asked of a given image. Adversaries are sorted by decreasing semantic similarity. Adversarial Rules for Text (SEARs), search and replace rules that produce semantic adversaries with little or no change in semantics, when applied to a corpus of sentences. Assuming that humans have limited time, and are thus willing to look at B rules, we propose a method for selecting such a set of rules given a reference dataset X. A rule takes the form r = (a→c), where the first instance of the antecedent a is replaced by the consequent c for every instance that includes a, as we previously illustrated in Figure 2a. The output after applying rule r on a sentence x is represented as the function call r(x), e.g. if r =(movie→film), r(“Great movie!”) = “Great film!”. Proposing a set of rules: In order to generalize a SEA x′ into a candidate rule, we must represent the changes that took place from x →x′. We will use x = “What color is it?” and x′ = “Which color is it?” from Figure 4 as a running example. One approach is exact matching: selecting the minimal contiguous sequence that turns x into x′, (What→Which) in the example. Such changes may not always be semantics preserving, so we also propose further rules by including the immediate context (previous and/or next word with respect to the sequence), e.g. (What color→Which color). Adding such context, however, may make rules very specific, thus restricting their value. To allow for generalization, we also represent the antecedent of proposed rules by a product of their raw text with coarse and fine-grained Part-of-Speech tags, and allow these tags to happen in the consequent if they match the antecedent. In the running example, we would propose rules like (What color→Which color), (What NOUN→Which NOUN), (WP color→Which color), etc. We generate SEAs and propose rules for every x ∈X, which gives us a set of candidate rules (second box in Figure 4, for loop in Algorithm 1). 859 Figure 4: SEAR process. (1) SEAs are generalized into candidate rules, (2) rules that are not semantically equivalent are filtered out, e.g. r5: (What→Which), (3) rules are selected according to Eq (3), in order to maximize coverage and avoid redundancy (e.g. rejecting r2, valuing r1 more highly than r4), and (4) a user vets selected rules and keeps the ones that they think are bugs. Selecting a set of rules: Given a set of candidate rules, we want to select a set R such that |R| ≤B, and the following properties are met: 1. Semantic Equivalence: Application of the rules in the set should produce semantically equivalent instances. This is equivalent to considering rules that have a high probability of inducing semantically equivalent instances when applied, i.e. E[SemEq(x, r(x))] ≥1−δ. This is the Filter step in Algorithm 1. 
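As an illustration of this rule-proposal step, here is a rough sketch (tokenization and POS tags are assumed to be provided, e.g. by spacy; this is not the authors' exact implementation):

import difflib
import itertools

def propose_rules(x_tokens, xprime_tokens, pos_tags):
    # x_tokens / xprime_tokens: tokenized original sentence and its SEA;
    # pos_tags: one tag per token of x. Returns candidate (antecedent,
    # consequent) rules: the minimal changed span, optionally extended by one
    # word of context, with each antecedent word optionally generalized to
    # its POS tag.
    matcher = difflib.SequenceMatcher(a=x_tokens, b=xprime_tokens)
    ops = [op for op in matcher.get_opcodes() if op[0] != "equal"]
    if not ops:
        return set()
    i1, i2 = ops[0][1], ops[-1][2]   # changed span in x
    j1, j2 = ops[0][3], ops[-1][4]   # replacement span in x'
    rules = set()
    for left, right in itertools.product([0, 1], repeat=2):
        lo, hi = max(0, i1 - left), min(len(x_tokens), i2 + right)
        if lo == hi:
            continue  # skip degenerate empty antecedents
        consequent = " ".join(x_tokens[lo:i1] + xprime_tokens[j1:j2] + x_tokens[i2:hi])
        choices = [(x_tokens[k], pos_tags[k]) for k in range(lo, hi)]
        for combo in itertools.product(*choices):
            rules.add((" ".join(combo), consequent))
    return rules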
For example, consider the rule (What→Which) in Fig 4 which produces some semantically equivalent instances, but also produces many instances that are unnatural (e.g. “What is he doing?” →“Which is he doing?”), and is thus filtered out by this criterion. 2. High adversary count: The rules in the set should induce as many SEAs as possible in validation data. Furthermore, each of the induced SEAs should have as high of a semantic similarity score as possible, i.e. for each rule r ∈R we want to maximize P x∈X S(x, r(x))SEA(x, r(x)). In Figure 4, r1 induces more and more similar mistakes when compared to r4, and is thus superior to r4. 3. Non-redundancy: Different rules in the set may induce the same SEAs, or may induce different SEAs for the same instances. Ideally, rules in the set should cover as many instances in the validation as possible, rather than focus on a small set of fragile predictions. Furthermore, rules should not be repetitive to the user. In Figure 4 (mid), r1 covers a superset of r2’s adversaries, making r2 completely redundant and thus not included in R. Properties 2 and 3 combined suggest a weighted coverage problem, where a rule r covers an instance x if SEA(x, r(x)), the weight of the connection being given by S(x, r(x)). We thus want to Algorithm 1 Generating SEARs for a model Require: Classifier f, Correct instances X Require: Hyperparameters, δ, τ, Budget B R ←{}{Set of rules} for all x ∈X do X′ = GenParaphrases(X, τ) A ←{x′ ∈X′ | f(x) ̸= f(x′)} {SEAs; §2} R ←R ∪Rules(A) end for R ←Filter(R, δ, τ) {Remove low scoring SEARs} R ←SubMod(R, B) {high count / score, diverse } return R find the set of semantically equivalent rules that: max R,|R|<B X x∈X max r∈R S(x, r(x))SEA(x, r(x)) (3) While Eq (3) is NP-hard, the objective is monotone submodular (Krause and Golovin, 2014), and thus a greedy algorithm that iteratively adds the rule with the highest marginal gain offers a constantfactor approximation guarantee of 1 −1/e to the optimum. This is the SubMod procedure in Algorithm 1, represented pictorially in Figure 4, where the output is a set of rules given to a human, who judges if they are really bugs or not. 4 Illustrative Examples Before evaluating the utility of SEAs and SEARs with user studies, we show examples in state-of-theart models for different tasks. Note that we treat these models as black boxes, not using internals or gradients in any way when discovering these bugs. Machine Comprehension: We take the AllenNLP (Gardner et al., 2017) implementation of BiDaF (Seo et al., 2017) for Machine Comprehension, and display some high coverage SEARs for it in Table 1 (also, Figures 1 and 2a). For each rule, 860 SEAR Questions / SEAs f(x) Flips What is What’s the NASUWT? Trade unions 2% What VBZ → Teachers in Wales What’s What is What’s a Hauptlied? main hymn Veni redemptor gentium What resource Which resource coal wool 1% What NOUN → was mined in the Newcastle area? Which NOUN What health Which health nervous breakdown problem did Tesla have in 1879? relations What was So what was Satyagraha 2% What VERB → Ghandi’s work called? Civil Disobedience So what VERB What is So what is a new trend Co-teaching in teaching? educational institutions What did And what did Tesla an induction motor 2% What VBD → develop in 1887? laboratory And what VBD What was And what was journalist sleep Kenneth Swezey’s job? Table 1: SEARs for Machine Comprehension SEAR Questions / SEAs f(x) Flips WP VBZ→ What has What’s been cut? 
Cake Pizza 3.3% WP’s Who is Who’s holding the baby Woman Man What NOUN→ What Which kind of floor is it? Wood Marble 3.9% Which NOUN What Which color is the jet? Gray White color →colour What color colour is the tray? Pink Green 2.2% What color colour is the jet? Gray Blue ADV is→ Where is Where’s the jet? Sky Airport 2.1% ADV’s How is How’s the desk? Messy Empty Table 2: SEARs for Visual QA we display two example questions with the corresponding SEA, the prediction (with corresponding change) and the percentage of “flips” - instances previously predicted correctly on the validation data, but predicted incorrectly after the application of the rule. The rule (What VBZ→What’s) generalizes the SEA on Figure 1, and shows that the model is fragile with respect to contractions (flips 2% of all correctly predicted instances on the validation data). The second rule uncovers a bug with respect to simple question rephrasing, while the third and fourth rules show that the model is not robust to a more conversational style of asking questions. Visual QA: We show SEARs for a state-of-theart visual question-answering model (Zhu et al., 2016) in Table 2. Even though the contexts are different (paragraphs for machine comprehension, images for VQA), it is interesting that both models display similar bugs. The fact that VQA is fragile to “Which” questions is because questions of this form are not in the training set, while (color→colour) probably stems from an American bias in data collection. Changes induced by these four rules flip more than 10% of the predictions in the validation data, which is of critical concern if the model is being evaluated for production. SEAR Reviews / SEAs f(x) Flips movie → Yeah, the movie film pretty much sucked . Neg Pos 2% film This is not movie film making . Neg Pos film → Excellent film movie . Pos Neg 1% movie I’ll give this film movie 10 out of 10 ! Pos Neg is →was Ray Charles is was legendary . Pos Neg 4% It is was a really good show to watch . Pos Neg this →that Now this that is a movie I really dislike . Neg Pos 1% The camera really likes her in this that movie. Pos Neg DET NOUN is The movie is It is terrible Neg Pos 1% →it is The dialog is It is atrocious Neg Pos Table 3: SEARs for Sentiment Analysis Sentiment Analysis: Finally, in Table 3 we display SEARs for a fastText (Joulin et al., 2016) model for sentiment analysis trained on movie reviews. Surprisingly, many of its predictions change for perturbations that have no sentiment connotations, even in the presence of polarity-laden words. 5 User Studies We compare automatically discovered SEAs and SEARs to user-generated adversaries and rules, and propose a way to fix the bugs induced by SEARs. Our evaluation benchmark includes two tasks: visual question answering (VQA) and sentiment analysis on movie review sentences. We choose these tasks because a human can quickly look at a prediction and judge if it is correct or incorrect, can easily perturb instances, and judge if two instances in a pair are semantically equivalent or not. Since our focus is debugging, throughout the experiment we only considered SEAs and SEARs on examples that are originally predicted correctly (i.e. every adversary is also by construction a mistake). The user interfaces for all experiments in this section are included in the supplementary material. 5.1 Implementation Details The paraphrasing model (Lapata et al., 2017) requires translation models to and from different languages. 
We train neural machine translation models using the default parameters of OpenNMTpy (Klein et al., 2017) for English↔Portuguese and English↔French models, on 2 million and 1 million parallel sentences (respectively) from EuroParl, news, and other sources (Tiedemann, 2012). We use the spacy library (http://spacy.io) for POS tagging. For SEAR generation, we set δ = 0.1 (i.e. at least 90% equivalence). We generate a set of candidate adversaries as described in Section 2, and ask mechanical turkers to judge them 861 Human vs SEA Human vs HSEA Neither 145 (48%) 127 (42%) Only Human 47 (16%) 38 (13%) Only SEA 54 (18%) 72 (24%) Both 54 (18%) 63 (21%) (a) Visual Question-Answering Human vs SEA Human vs HSEA Neither 177 (59%) 161 (54%) Only Human 45 (15%) 40 (13%) Only SEA 47 (16%) 63 (21%) Both 31 (10%) 36 (12%) (b) Sentiment Analysis Table 4: Finding Semantically Equivalent Adversaries: we compare how often humans produce semantics-preserving adversaries, when compared to our automatically generated adversaries (SEA, left) and our adversaries filtered by humans (HSEA, right). There are four possible outcomes: neither produces a semantic equivalent adversary (i.e. they either do not produce an adversary or the adversary produced is not semantically equivalent), both do, or only one is able to do so. for semantic equivalence. Using these evaluations, we identify τ = 0.0008 as the value that minimizes the entropy in the induced splits, and use it for the remaining experiments. Source code and pretrained language models are available at https: //github.com/marcotcr/sears. For VQA, we use the multiple choice telling system and dataset of Zhu et al. (2016), using their implementation, with default parameters. The training data consists of questions that begin with “What”, “Where”, “When”, “Who”, “Why”, and “How”. The task is multiple choice, with four possible answers per instance. For sentiment analysis, we train a fastText (Joulin et al., 2016) model with unigrams and bigrams (embedding size of 50) on RottenTomato movie reviews (Pang and Lee, 2005), and evaluate it on IMDB sentence-sized reviews (Kotzias et al., 2015), simulating the common case where a model trained on a public dataset is applied to new data from a similar domain. 5.2 Can humans find good adversaries? In this experiment, we compare our method for generating SEAs with user’s ability to discover semantic-preserving adversaries. We take a random sample of 100 correctly-predicted instances for each task. In the first condition (human), we display each instance to 3 Amazon Mechanical Turk workers, and give them 10 attempts at creating semantically equivalent adversaries (with immediate feedback as to whether or not their attempts changed the prediction). Next, we ask them to choose the adversary that is semantically closest to the original instance, out of the candidates they generated. In the second condition (SEA), we generate adversaries for each of the instances, and pick the best adversary according to the semantic scorer. The third condition (HSEA) is a collaboration between our method and humans: we take the top 5 adversaries ranked by S(x, x′), and ask workers to pick the one closest to the original instance, rather than asking them to generate the adversaries. 
To evaluate whether the proposed adversaries are semantically equivalent, we ask a separate set of workers to evaluate the similarity between each adversary and the original instance (with the image as context for VQA), on a scale of 1 (completely unrelated) to 5 (exactly the same meaning). Each adversary is evaluated by at least 10 workers, and considered equivalent if the median score ≥4. We thus obtain 300 comparisons between human and SEA, and 300 between human and HSEA. The results in Table 4a and 4b are consistent across tasks: both models are susceptible to SEAs for a large fraction of predictions, and our fully automated method is able to produce SEAs as often as humans (left columns). On the other hand, asking humans to choose from generated SEAs (HSEA) yields much better results than asking humans to generate them (right columns), or using the highest scored SEA. The semantic scorer does make mistakes, so the top adversary is not always semantically equivalent, but a good quality SEA is often in the top 5, and is easily identified by users. On both datasets, the automated method or humans were able to generate adversaries at the exclusion of the other roughly one third of the time, which indicates that they do not generate the same adversaries. Humans generate paraphrases differently than our method: the average character edit distance of our SEAs is 6.2 for VQA and 9.0 for Sentiment, while for humans it is 18.1 and 43.3, respectively. This is illustrated by examples in Table 5 - in Table 5a we see examples where very compact changes generate adversaries (humans were not able to find these changes though). The examples in Table 5b indicate that humans can generate adversaries that: (1) make use of the visual context in VQA, which our method does not, and (2) sig862 Dataset Original SEA VQA Where are the men? Where are the males? What kind of meat is on the boy’s plate? What sort of meat is on the boy’s plate? Sentiment They are so easy to love, but even more easy to identify with. They’re so easy to love, but even more easy to identify with. Today the graphics are crap. Today, graphics are bullshit. (a) Automatically generated adversaries, examples where humans failed to generate SEAs (Only SEA) Dataset Original Human-generated SEA VQA How many suitcases? How many suitcases are sitting on the shelf? Where is the blue van? What is the blue van’s location? Sentiment (very serious spoilers) this movie was a huge disappointment. serious spoilers this movie did not deliver what I hoped Also great directing and photography. Photography and directing were on point. (b) Human generated adversaries, examples where our approach failed to generate SEAs (Only Human) Table 5: Examples of generated adversaries nificantly change the sentence structure, which the translation-based semantic scorer does not. 5.3 Can experts find high-impact bugs? Here we investigate whether experts are able to detect high-impact global bugs, i.e. devise rules that flip many predictions, and compare them to generated SEARs. Instead of AMT workers, we have 26 expert subjects: students, graduates, or professors who have taken at least a graduate course in machine learning or NLP1. The experiment setup is as follows: for each task, subjects are given an interface where they see examples in the validation data, perturb those examples, and get predictions. The interface also allows them to create search and replace rules, with immediate feedback on how many mistakes are induced by their rules. 
They also see the list of examples where the rules apply, so they can verify semantic equivalence. Subjects are instructed to try to maximize the number of mistakes induced in the validation data (i.e. maximize “mistake coverage”), but only through semantically equivalent rules. They can try as many rules as they like, and are asked to select the best set of at most 10 rules at the end. This is quite a challenging task for humans (yet another reason to prefer algorithmic approaches), but we are not aware of any existing automated methods. Finally, we in1We have an IRB/consent form, and personal information was only collected as needed to compensate subjects VQA Sentiment 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 % of correct predictions flipped 3.0% 3.3% 14.2% 10.9% 15.7% 12.5% Experts SEARs Combined Figure 5: Mistakes induced by expert-generated rules (green), SEARs (blue), and a combination of both (pink), with standard error bars. VQA Sentiment 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 Minutes 16.9 12.9 10.1 5.4 Discovering rules Evaluating SEARs Figure 6: Time for users to create rules (green) and to evaluate SEARs (blue), with standard error bars struct subjects they could finish each task in about 15 minutes (some took longer, some ended earlier), in order to keep the total time reasonable. After creating their rules for VQA and sentiment analysis, the subjects evaluate 20 SEARs (one rule at a time) for each task, and accept only semantically equivalent rules. When a subject rejects a rule, we recompute the remaining set according to Eq (3) in real time. If a subject accepts more than 10 rules, only the first 10 are considered, in order to ensure a fair comparison against the expert-generated rules. We compare expert-generated rules with accepted SEARs (each subject’s rules are compared to the SEARs they accepted) in terms of the percentage of the correct predictions that “flip” when the rules are applied. This is what we asked the subjects to maximize, and all the rules were ones deemed to be semantic equivalent by the subjects themselves. We also consider the union of expertgenerated rules and accepted SEARs. The results in Figure 5 show that on both datasets, the filtered SEARs induce a much higher rate of mistakes than the rules the subjects themselves created, with a small increase when the union of both sets is taken. Furthermore, subjects spent less time evaluating 863 Error rate Validation Sensitivity Visual QA Original Model 44.4.% 12.6% SEAR Augmented 45.7 % 1.4% Sentiment Analysis Original Model 22.1% 12.6% SEAR Augmented 21.3% 3.4% Table 6: Fixing bugs using SEARs: Effect of retraining models using SEARs, both on original validation and on sensitivity dataset. Retraining significantly reduces the number of bugs, with statistically insignificant changes to accuracy. SEARs than trying to create their own rules (Figure 6). SEARs for sentiment analysis contain fewer POS tags, and are thus easier to evaluate for semantic equivalence than for VQA. Discovering these bugs is hard for humans (even experts) without SEARs: not only do they need to imagine rules that maintain semantic equivalence, they must also discover the model’s weak spots. Making good use of POS tags is also a challenge: only 50% of subjects attempt rules with POS tags for VQA, 36% for sentiment analysis. Experts accepted 8.69 rules (on average) out of 20 for VQA as semantically equivalent, and 17.32 out of 20 for sentiment analysis. 
Similar to the previous experiment, errors made by the semantic scorer lead to rules that are not semantically equivalent (e.g. Table 7). With minimal human intervention, however, SEARs vastly outperform human experts in finding impactful bugs. 5.4 Fixing bugs using SEARs Once such bugs are discovered, it is natural to want to fix them. The global and deterministic nature of SEARs make them actionable, as they represent bugs in a systematic manner. Once impactful bugs are identified, we use a simple data augmentation procedure: applying SEARs to the training data, and retraining the model on the original training augmented with the generated examples. We take the rules that are accepted by ≥20 subjects as accepted bugs, a total of 4 rules (in Table 2) for VQA, and 16 rules for sentiment (including ones in Table 3). We then augment the training data by applying these rules to it, and retrain the models. To check if the bugs are still present, we create a sensitivity dataset by applying these SEARs to instances predicted correctly on the validation. A model not prone to the bugs described by these rules should not change any of its predictions, and should thus have error rate 0% on this sensitivity data. We also measure accuracy on the original validation data, to make sure that our bug-fixing procedure is not decreasing accuracy. Table 6 shows that the incidence of these errors is greatly reduced after augmentation, with negligible changes to the validation accuracy (on both tasks, the changes are consistent with the effect of retraining with different seeds). These results show that SEARs are useful not only for discovering bugs, but are also actionable through a simple augmentation technique for any model. 6 Related Work Previous work on debugging primarily focuses on explaining predictions in validation data in order to uncover bugs (Ribeiro et al., 2016, 2018; Kulesza et al., 2011), or find labeling errors (Zhang et al., 2018; Koh and Liang, 2017). Our work is complementary to these techniques, as they provide no mechanism to detect oversensitivity bugs. We are able to uncover these bugs even when they are not present in the data, since we generate sentences. Adversarial examples for image recognition are typically indistinguishable to the human eye (Szegedy et al., 2014). These are more of a security concern than bugs per se, as images with adversarial noise are not “natural”, and not expected to occur in the real world outside of targeted attacks. Adversaries are usually specific to predictions, and even universal adversarial perturbations (Moosavi-Dezfooli et al., 2017) are not natural, semantically meaningful to humans, or actionable. “Imperceptible” adversarial noise does not carry over from images to text, as adding or changing a single word in a sentence can drastically alter its meaning. Jia and Liang (2017) recognize that a true analog to detect oversensitivity would need semantic-preserving perturbations, but do not pursue an automated solution due to the difficulty of paraphrase generation. Their adversaries are whole sentence concatenations, generated by manually defined rules tailored to reading comprehension, and each adversary is specific to an individual instance. Zhao et al. (2018) generate natural text adversaries by projecting the input data to a latent space using a generative adversarial networks (GANs), and searching for adversaries close to the original instance in this latent space. 
Apart from the challenge of training GANs to generate high 864 quality text, there is no guarantee that an example close in the latent space is semantically equivalent. Ebrahimi et al. (2018), along with proposing character-level changes that are not semanticpreserving, also propose a heuristic that replaces single words adversarially to preserve semantics. This approach not only depends on having whitebox access to the model, but is also not able to generate many adversaries (only ∼1.6% for sentiment analysis, compare to ∼33% for SEAs in Table 4b). Developed concurrently with our work, Iyyer et al. (2018) proposes a neural paraphrase model based on back-translated data, which is able to produce paraphrases that have different sentence structures from the original. They use paraphrases to generate adversaries and try to post-process nonsensical outputs, but they do not explicitly reject non-semantics preserving ones, nor do they try to induce rules from individual adversaries. In any case, their adversaries are also useful for data augmentation, in experiments similar to ours. In summary, previous work on text adversaries change semantics, only generate local (instancespecific) adversaries (Zhao et al., 2018; Iyyer et al., 2018), or are tailored for white-box models (Ebrahimi et al., 2018) or specific tasks (Jia and Liang, 2017). In contrast, SEAs expose oversensitivity for specific predictions of black-box models for a variety of tasks, while SEARs are intuitive and actionable global rules that induce a high number of high-quality adversaries. To our knowledge, we are also the first to evaluate human performance in adversarial generation (semantics-preserving or otherwise), and our extensive evaluation shows that SEAs and SEARs detect individual bugs and general patterns better than humans can. 7 Limitations and Future Work Having demonstrated the usefulness of SEAs and SEARs in a variety of domains, we now describe their limitations and opportunities for future work. Semantic scoring errors: Paraphrasing is still an active area of research, and thus our semantic scorer is sometimes incorrect in evaluating rules for semantic equivalence. We show examples of SEARs that are rejected by users in Table 7 – the semantic scorer does not sufficiently penalize preposition changes, and is biased towards common terms. The presence of such errors is why we still need humans in the loop to accept or reject SEARs. SEAR Questions / SEAs f(x) on →in What is on in the background? A building Mountains What is on? in Lights The television VBP→is Where are is the water bottles Table Vending Maching Where are is the people gathered living room kitchen VERB on → What is on the background? A building Mountains VERB What are the planes parked on? Concrete landing strip Table 7: SEARs for VQA that are rejected by users Other paraphrase limitations: Paraphrase models based on neural machine translation are biased towards maintaining the sentence structure, and thus do not produce certain adversaries (e.g. Table 5b), which recent work on paraphrasing (Iyyer et al., 2018) or generation using GANs (Zhao et al., 2018) may address. More critically, existing models are inaccurate for long texts, restricting SEAs and SEARs to sentences. Better bug fixing: Our data augmentation has the human users accept/reject rules based on whether or not they preserve semantics. 
Developing more effective ways of leveraging the expert’s time to close the loop, and facilitating more interactive collaboration between humans and SEARs are exciting areas for future work. 8 Conclusion We introduced SEAs and SEARs – adversarial examples and rules that preserve semantics, while causing models to make mistakes. We presented examples of such bugs discovered in state-of-theart models for various tasks, and demonstrated via user studies that non-experts and experts alike are much better at detecting local and global bugs in NLP models by using our methods. We also close the loop by proposing a simple data augmentation solution that greatly reduced oversensitivity while maintaining accuracy. We demonstrated that SEAs and SEARs can be an invaluable tool for debugging NLP models, while indicating their current limitations and avenues for future work. Acknowledgements We are grateful to Dan Weld, Robert L. Logan IV, and to the anonymous reviewers for their feedback. This work was supported in part by ONR award #N00014-13-1-0023, in part by NSF award #IIS1756023, and in part by funding from FICO. The views expressed are of the authors and do not reflect the policy or opinion of the funding agencies. 865 References Aayush Bansal, Ali Farhadi, and Devi Parikh. 2014. Towards transparent systems: Semantic characterization of failure modes. In European Conference on Computer Vision (ECCV). Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Deijing Dou. 2018. HotFlip: White-Box Adversarial Examples for NLP. In Annual Meeting of the Association for Computational Linguistics (ACL). Matt A Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. Allennlp: A deep semantic natural language processing platform. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In North American Association for Computational Linguistics (NAACL). Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Empirical Methods in Natural Language Processing (EMNLP). Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759 . Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. In Annual Meeting of the Association for Computational Linguistics (ACL). Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In International Conference on Machine Learning (ICML). Dimitrios Kotzias, Misha Denil, Nando de Freitas, and Padhraic Smyth. 2015. From group to individual labels using deep features. In Knowledge Discovery and Data Mining (KDD). Andreas Krause and Daniel Golovin. 2014. Submodular function maximization. In Tractability: Practical Approaches to Hard Problems. Todd Kulesza, Simone Stumpf, Weng-Keen Wong, Margaret M. Burnett, Stephen Perona, Andrew Jensen Ko, and Ian Oberst. 2011. Whyoriented end-user debugging of naive bayes text classification. TiiS 1:2:1–2:31. Mirella Lapata, Rico Sennrich, and Jonathan Mallinson. 2017. Paraphrasing revisited with neural machine translation. In European Chapter of the ACL (EACL). Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International Conference on Machine Learning (ICML). 
Jiwei Li, Will Monroe, and Daniel Jurafsky. 2016. Understanding neural networks through representation erasure. CoRR abs/1612.08220. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. 2017. Universal adversarial perturbations. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) . Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Annual Meeting of the Association for Computational Linguistics (ACL). Kayur Patel, James Fogarty, James A. Landay, and Beverly Harrison. 2008. Investigating statistical machine learning as a tool for software development. In Human Factors in Computing Systems (CHI). Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP). Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Knowledge Discovery and Data Mining (KDD). Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Anchors: High-precision modelagnostic explanations. In AAAI Conference on Artificial Intelligence. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In International Conference on Learning Representations (ICLR). Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR). Jörg Tiedemann. 2012. Parallel data, tools and interfaces in opus. In International Conference on Language Resources and Evaluation (LREC). John Wieting and Kevin Gimpel. 2017. Revisiting recurrent networks for paraphrastic sentence embeddings. In Annual Meeting of the Association for Computational Linguistics (ACL). Xuezhou Zhang, Xiaojin Zhu, and Stephen Wright. 2018. Training set debugging using trusted items. In AAAI Conference on Artificial Intelligence. Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2018. Generating natural adversarial examples. In International Conference on Learning Representations (ICLR). Yuke Zhu, Oliver Groth, Michael Bernstein, and Li FeiFei. 2016. Visual7W: Grounded Question Answering in Images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
2018
79
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 76–86 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 76 The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation Mia Xu Chen ∗ Orhan Firat ∗ Ankur Bapna ∗ Melvin Johnson Wolfgang Macherey George Foster Llion Jones Niki Parmar Noam Shazeer Ashish Vaswani Jakob Uszkoreit Lukasz Kaiser Mike Schuster Zhifeng Chen miachen,orhanf,ankurbpn,[email protected] Google AI Yonghui Wu Macduff Hughes Abstract The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT’14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets. 1 Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm. In the first architectures that surpassed ∗Equal contribution. the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015). The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g. Baidu (Zhou et al., 2016), Google (Wu et al., 2016), and Systran (Crego et al., 2016). Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices. such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017). Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017). The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed. Most recently, the Transformer model (Vaswani et al., 2017), which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence. In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert. 
This ‘bag of tricks’ can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence. This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture 77 and how much can be attributed to the associated training and inference techniques. In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper. Clearly, they need to be considered in order to ensure a fair comparison across different model architectures. In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models. In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup. We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer. In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance. Our contributions are three-fold: 1. In ablation studies, we quantify the effect of several modeling improvements (including multi-head attention and layer normalization) as well as optimization techniques (such as synchronous replica training and labelsmoothing), which are used in recent architectures. We demonstrate that these techniques are applicable across different model architectures. 2. Combining these improvements with the RNMT model, we propose the new RNMT+ model, which significantly outperforms all fundamental architectures on the widely-used WMT’14 En→Fr and En→De benchmark datasets. We provide a detailed model analysis and comparison of RNMT+, ConvS2S and Transformer in terms of model quality, model size, and training and inference speed. 3. Inspired by our understanding of the relative strengths and weaknesses of individual model architectures, we propose new model architectures that combine components from the RNMT+ and the Transformer model, and achieve better results than both individual architectures. We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT). In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality. In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results. 2 Background In this section, we briefly discuss the commmonly used NMT architectures. 2.1 RNN-based NMT Models - RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network. The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time. 
The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005), and stacked decoders with unidirectional RNNs. Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014), and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections. In Google-NMT (GNMT) (Wu et al., 2016), the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers. The decoder is equipped with a single attention network and 8 uni-directional LSTM layers. Both the encoder and the decoder use residual skip connections between consecutive layers. In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture. 2.2 Convolutional NMT Models - ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017), both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer 78 contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016). Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs. Positional embeddings are used to provide explicit positional information to the model. Following the practice in (Gehring et al., 2017), we scale the gradients of the encoder layers to stabilize training. We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence. We follow the public ConvS2S codebase1 in our experiments. 2.3 Conditional Transformation-based NMT Models - Transformer The Transformer model (Vaswani et al., 2017) is motivated by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training. (2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer. The Transformer model still follows the encoder-decoder paradigm. Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network. Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs. There are two details which we found very important to the model’s performance: (1) Each sublayer in the transformer (i.e. self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize →transform →dropout→residual-add. (2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions. In this paper, we follow the latest version of the 1https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor2 codebase. 
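These two details are the ones most often missed when re-implementing the model, so a minimal PyTorch sketch of the encoder side may help. The layer sizes and class names here are illustrative choices of ours, not taken from the Tensor2Tensor code.

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Thin wrapper so multi-head self-attention acts as a plain x -> y transform."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(x, x, x, need_weights=False)
        return out

class PreNormSublayer(nn.Module):
    """Detail (1): normalize -> transform -> dropout -> residual-add."""
    def __init__(self, d_model: int, transform: nn.Module, dropout: float = 0.1):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.transform = transform
        self.dropout = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.dropout(self.transform(self.norm(x)))

class EncoderLayer(nn.Module):
    def __init__(self, d_model: int = 512, d_ff: int = 2048, n_heads: int = 8):
        super().__init__()
        self.self_attn = PreNormSublayer(d_model, SelfAttention(d_model, n_heads))
        self.feed_forward = PreNormSublayer(
            d_model,
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.feed_forward(self.self_attn(x))

class TransformerEncoder(nn.Module):
    def __init__(self, n_layers: int = 6, d_model: int = 512):
        super().__init__()
        self.layers = nn.ModuleList([EncoderLayer(d_model) for _ in range(n_layers)])
        # Detail (2): re-normalize the final output so repeated residual
        # additions do not blow up its scale.
        self.final_norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x)
        return self.final_norm(x)
```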
2.4 A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995)3. Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990), especially natural language (Grefenstette et al., 2015) effectively. In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001), confirming the well known dilemma of trainability versus expressivity. Convolutional layers are adept at capturing local context and local correlations by design. A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow. In practice, this weakness is mitigated by stacking more convolutional layers (e.g. 15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques. The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989), and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence. On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g. sinusoidal positional encodings). Above theoretical characterizations will drive our explorations in the following sections. 3 Experiment Setup We train our models on the standard WMT’14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively. Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as “wordpieces”) using the approach described in (Schuster and Nakajima, 2012). 2https://github.com/tensorflow/tensor2tensor 3Assuming that data complexity is satisfied. 79 Figure 1: Model architecture of RNMT+. On the left side, the encoder network has 6 bidirectional LSTM layers. At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated. On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention. The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer. We use a shared vocabulary of 32K sub-word units for each source-target language pair. No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets. We report all our results on newstest 2014, which serves as the test set. A combination of newstest 2012 and newstest 2013 is used for validation. To evaluate the models, we compute the BLEU metric on tokenized, true-case output.4 For each training run, we evaluate the model every 30 minutes on the dev set. Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations. We report the mean test score and standard deviation over the selected window. 
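Concretely, the window selection is a few lines of NumPy. The assumption that dev and test BLEU are logged for the same sequence of checkpoints, and the function name, are ours.

```python
import numpy as np

def best_window_scores(dev_bleu, test_bleu, window: int = 21):
    """Find the window of `window` consecutive checkpoint evaluations with the
    highest average dev BLEU; report mean and std of test BLEU over it."""
    dev = np.asarray(dev_bleu, dtype=float)
    test = np.asarray(test_bleu, dtype=float)
    assert len(dev) == len(test) and len(dev) >= window
    window_means = np.convolve(dev, np.ones(window) / window, mode="valid")
    start = int(np.argmax(window_means))
    selected = test[start:start + window]
    return selected.mean(), selected.std()

# Hypothetical logs, one entry per 30-minute evaluation:
dev_log = 38.0 + np.random.rand(60)
test_log = 38.0 + np.random.rand(60)
mean, std = best_window_scores(dev_log, test_log)
print(f"test BLEU = {mean:.2f} ± {std:.2f}")
```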
This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models. To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments. We refrain from using checkpoint averaging (exponential moving averages of parameters) (JunczysDowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on 4This procedure is used in the literature to which we compare (Gehring et al., 2017; Wu et al., 2016). evaluating the performance of individual models. 4 RNMT+ 4.1 Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1. Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model. There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT. For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer. The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model. Residual connections are added to the third layer and above for both the encoder and decoder. Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell. Our empirical results show that layer normalization greatly stabilizes training. No non-linearity is applied to the LSTM output. A projection layer is added to the encoder final output.5 Multi-head additive attention is used instead of the single-head attention in the GNMT model. Similar to GNMT, we use the 5Additional projection aims to reduce the dimensionality of the encoder output representations to match the decoder stack dimension. 80 bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context. In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input. This is important for both the quality of the models with multi-head attention and the stability of the training process. Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training. We compensate for the resulting longer per-step time with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT. We apply the following regularization techniques during training. • Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer’s input. Attention dropout is also applied. • Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015). Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention. Similar to the observations in (Chorowski and Jaitly, 2016), we found it beneficial to use a larger beam size (e.g. 16, 20, etc.) during decoding when models are trained with label smoothing. • Weight Decay: For the WMT’14 En→De task, we apply L2 regularization to the weights with λ = 10−5. Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required. 
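Of the three regularizers listed above, label smoothing is the easiest to state precisely in code. The sketch below assumes the standard uniform formulation, with ε = 0.1 of the probability mass spread over the whole vocabulary; the paper does not spell out whether the gold token is excluded from the uniform component.

```python
import torch
import torch.nn.functional as F

def label_smoothed_loss(logits: torch.Tensor, targets: torch.Tensor,
                        epsilon: float = 0.1) -> torch.Tensor:
    """Cross-entropy against a smoothed target distribution:
    (1 - epsilon) on the gold token, epsilon spread uniformly over the vocabulary.
    logits: (batch, vocab); targets: (batch,) of gold token ids."""
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(dim=-1, index=targets.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)
    return ((1.0 - epsilon) * nll + epsilon * uniform).mean()

# Hypothetical usage with a batch of decoder outputs over a 32K sub-word vocabulary:
logits = torch.randn(8, 32000)
targets = torch.randint(0, 32000, (8,))
loss = label_smoothed_loss(logits, targets, epsilon=0.1)
```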
We use the Adam optimizer (Kingma and Ba, 2014) with β1 = 0.9, β2 = 0.999, ϵ = 10−6 and vary the learning rate according to this schedule: lr = 10−4 · min  1 + t · (n −1) np , n, n · (2n) s−nt e−s  (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay. Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10−5 after the decay ends. This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017). In contrast to the asynchronous training used for GNMT (Dean et al., 2012), we train RNMT+ models with synchronous training (Chen et al., 2016). Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality. To further stabilize training, we also use adaptive gradient clipping. We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion. More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average. 4.2 Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer. All models were trained with synchronous training. RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs. For RNMT+, we use sentence-level crossentropy loss. Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences). For ConvS2S and Transformer models, we use token-level cross-entropy loss. Each training batch contained 65536 source tokens and 65536 target tokens. For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) without reinforcement learning. Table 1 shows our results on the WMT’14 En→Fr task. Both the Transformer Big model and RNMT+ outperform GNMT and ConvS2S by about 2 BLEU points. RNMT+ is slightly better than the Transformer Big model in terms of its mean BLEU score. RNMT+ also yields a much lower standard deviation, and hence we observed much less fluctuation in the training curve. It takes approximately 3 days for the Transformer 81 Base model to converge, while both RNMT+ and the Transformer Big model require about 5 days to converge. Although the batching schemes are quite different between the Transformer Big and the RNMT+ model, they have processed about the same amount of training samples upon convergence. Model Test BLEU Epochs Training Time GNMT 38.95 ConvS2S 7 39.49 ± 0.11 62.2 438h Trans. Base 39.43 ± 0.17 20.7 90h Trans. Big 8 40.73 ± 0.19 8.3 120h RNMT+ 41.00 ± 0.05 8.5 120h Table 1: Results on WMT14 En→Fr. The numbers before and after ‘±’ are the mean and standard deviation of test BLEU score over an evaluation window. Note that Transformer models are trained using 16 GPUs, while ConvS2S and RNMT+ are trained using 32 GPUs. Table 2 shows our results on the WMT’14 En→De task. 
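The learning-rate schedule of Eq. (1) above is hard to read in its typeset form; our reconstruction is lr = 10⁻⁴ · min(1 + t(n−1)/(np), n, n·(2n)^((s−t)/(e−s))), which indeed settles at 5·10⁻⁵ once the decay ends. The sketch below implements this reading together with the adaptive gradient clipping; the reconstruction, the sliding window used for the gradient-norm statistics, and the warm-up count are our assumptions.

```python
import math
from collections import deque

def rnmt_plus_lr(t: int, n: int, p: int, s: int, e: int, base: float = 1e-4) -> float:
    """Eq. (1): linear warmup, constant plateau at n*base, exponential decay
    between steps s and e, then held constant (5e-5 for base = 1e-4)."""
    warmup = 1.0 + t * (n - 1) / (n * p)
    decay = n * (2.0 * n) ** ((s - t) / (e - s))
    lr = base * min(warmup, n, decay)
    return max(lr, 0.5 * base)   # keep lr at 5e-5 after the decay ends

class GradNormAnomalyDetector:
    """Track a moving average and standard deviation of log(grad norm) and
    flag a step for skipping when the log norm exceeds the average by
    `num_std` standard deviations (4 in the paper)."""
    def __init__(self, window: int = 1000, warmup_steps: int = 100,
                 num_std: float = 4.0):
        self.history = deque(maxlen=window)   # sliding window: our assumption
        self.warmup_steps = warmup_steps
        self.num_std = num_std

    def should_skip(self, grad_norm: float) -> bool:
        log_norm = math.log(grad_norm + 1e-12)
        if len(self.history) >= self.warmup_steps:
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            if log_norm > mean + self.num_std * math.sqrt(var):
                return True                   # anomalous norm: discard this step
        self.history.append(log_norm)         # only track non-anomalous steps
        return False
```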
The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model improves by over 3 BLEU points. RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49. In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task. Table 3 summarizes training performance and model statistics. The Transformer Base model 6Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De. 7The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3. 2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models. 3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation. We observed a significant BLEU increase (about 0.6) on applying these post processing techniques. 4) In (Vaswani et al., 2017), reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input. Model Test BLEU Epochs Training Time GNMT 24.67 ConvS2S 25.01 ±0.17 38 20h Trans. Base 27.26 ± 0.15 38 17h Trans. Big 27.94 ± 0.18 26.9 48h RNMT+ 28.49 ± 0.05 24.6 40h Table 2: Results on WMT14 En→De. Note that Transformer models are trained using 16 GPUs, while ConvS2S and RNMT+ are trained using 32 GPUs. is the fastest model in terms of training speed. RNMT+ is slower to train than the Transformer Big model on a per-GPU basis. However, since the RNMT+ model is quite stable, we were able to offset the lower per-GPU throughput with higher concurrency by increasing the number of model replicas, and hence the overall time to convergence was not slowed down much. We also computed the number of floating point operations (FLOPs) in the model’s forward path as well as the number of total parameters for all architectures (cf. Table 3). RNMT+ requires fewer FLOPs than the Transformer Big model, even though both models have a comparable number of parameters. Model Examples/s FLOPs Params ConvS2S 80 15.7B 263.4M Trans. Base 160 6.2B 93.3M Trans. Big 50 31.2B 375.4M RNMT+ 30 28.1B 378.9M Table 3: Performance comparison. Examples/s are normalized by the number of GPUs used in the training job. FLOPs are computed assuming that source and target sequence length are both 50. 5 Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models. We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance. We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently. By doing this we hope to learn two things about each technique: (1) How much does 82 it affect the model performance? (2) How useful is it for stable training of other techniques and hence the final model? Model RNMT+ Trans. Big Baseline 41.00 40.73 - Label Smoothing 40.33 40.49 - Multi-head Attention 40.44 39.83 - Layer Norm. * * - Sync. 
Training 39.68 * Table 4: Ablation results of RNMT+ and the Transformer Big model on WMT’14 En →Fr. We report average BLEU scores on the test set. An asterisk ’*’ indicates an unstable training run (training halts due to non-finite elements). From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models. • Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models. • Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used. Removing layer normalization results in unstable training runs for both models. Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case. To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters. • Synchronous training Removing synchronous training has different effects on RNMT+ and Transformer. For RNMT+, it results in a significant quality drop, while for the Transformer Big model, it causes the model to become unstable. We also notice that synchronous training is only successful when coupled with a tailored learning rate schedule that has a warmup stage at the beginning (cf. Eq. 1 for RNMT+ and Eq. 2 for Transformer). For RNMT+, removing this warmup stage during synchronous training causes the model to become unstable. 6 Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family. These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family. 6.1 Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and interpret the representations from the encoder and, at the same time, track the current target history. Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation. We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations. We start by combining the encoder and decoder from different model families. Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder. Encoder Decoder En→Fr Test BLEU Trans. Big Trans. Big 40.73 ± 0.19 RNMT+ RNMT+ 41.00 ± 0.05 Trans. Big RNMT+ 41.12 ± 0.16 RNMT+ Trans. Big 39.92 ± 0.21 Table 5: Results for encoder-decoder hybrids. 
From Table 5, it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de83 coder is beneficial for conditional language generation. 6.2 Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information. Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations. We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5. We study two mixing schemes in the encoder (see Fig. 2): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention. The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017). Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder. Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity. As shown in Table 6, the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT’14 En→Fr task. This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context. (2) Multi-Column Encoder: As illustrated in Fig. 2b, a multi-column encoder merges the outputs of several independent encoders into a single combined representation. Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination. A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation. Our best multi-column encoder performs a simple concatenation of individual column outputs. The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6. As shown in Table 6, the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT’14 benchmark tasks. Model En→Fr BLEU En→De BLEU Trans. Big 40.73 ± 0.19 27.94 ± 0.18 RNMT+ 41.00 ± 0.05 28.49 ± 0.05 Cascaded 41.67 ± 0.11 28.62 ± 0.06 MultiCol 41.66 ± 0.11 28.84 ± 0.06 Table 6: Results for hybrids with cascaded encoder and multi-column encoder. (a) Cascaded Encoder (b) Multi-Column Encoder Figure 2: Vertical and horizontal mixing of Transformer and RNMT+ components in an encoder. 7 Conclusion In this work we explored the efficacy of several architectural and training techniques proposed in recent studies on seq2seq models for NMT. We demonstrated that many of these techniques are broadly applicable to multiple model architectures. Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT’14 En→Fr and En→De tasks. We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts. 
We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+. We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT. 84 Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua? Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes? How transferable are the representations learned by the different architectures to other tasks? And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility? Acknowledgments We would like to thank the entire Google Brain Team and Google Translate Team for their foundational contributions to this project. We would also like to thank the entire Tensor2Tensor development team for their useful inputs and discussions. References Lei Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. CoRR abs/1607.06450. http://arxiv.org/abs/1607.06450. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. http://arxiv.org/abs/1409.0473. Y. Bengio, P. Simard, and P. Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. Trans. Neur. Netw. 5(2):157–166. Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc Le. 2017. Massive exploration of neural machine translation architectures. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 1442–1451. https://www.aclweb.org/anthology/D17-1151. Hugh Chen, Scott Lundberg, and Su-In Lee. 2017. Checkpoint ensembles: Ensemble methods from a single training process. CoRR abs/1710.03282. http://arxiv.org/abs/1710.03282. Jianmin Chen, Rajat Monga, Samy Bengio, and Rafal J´ozefowicz. 2016. Revisiting distributed synchronous SGD. CoRR abs/1604.00981. http://arxiv.org/abs/1604.00981. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing. http://arxiv.org/abs/1406.1078. Jan Chorowski and Navdeep Jaitly. 2016. Towards better decoding and language model integration in sequence to sequence models. CoRR abs/1612.02695. http://arxiv.org/abs/1612.02695. Josep Maria Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Byeongil Ko, Catherine Kobus, Jean Lorieux, Leidiana Martins, Dang-Chuan Nguyen, Alexandra Priori, Thomas Riccardi, Natalia Segal, Christophe Servan, Cyril Tiquet, Bo Wang, Jin Yang, Dakun Zhang, Jing Zhou, and Peter Zoldan. 2016. Systran’s pure neural machine translation systems. CoRR abs/1610.05540. http://arxiv.org/abs/1610.05540. Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2016. 
Language modeling with gated convolutional networks. CoRR abs/1612.08083. http://arxiv.org/abs/1612.08083. Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, MarcAurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. 2012. Large scale distributed deep networks. In NIPS. Michael Denkowski and Graham Neubig. 2017. Stronger baselines for trustable results in neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation. Association for Computational Linguistics, Vancouver, pages 18–27. http://www.aclweb.org/anthology/W173203. Jacob Devlin. 2017. Sharp models on dull hardware: Fast and accurate neural machine translation decoding on the cpu. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 2820–2825. http://aclweb.org/anthology/D17-1300. Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science 14(2):179 – 211. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. CoRR abs/1705.03122. http://arxiv.org/abs/1705.03122. Felix A Gers, J¨urgen Schmidhuber, and Fred Cummins. 2000. Learning to forget: Continual prediction with LSTM. Neural computation 12(10):2451–2471. Priya Goyal, Piotr Doll´ar, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. 85 2017. Accurate, large minibatch SGD: training imagenet in 1 hour. CoRR abs/1706.02677. http://arxiv.org/abs/1706.02677. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks 18(5):602 – 610. IJCNN 2005. Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. 2015. Learning to transduce with unbounded memory. In Proceedings of the 28th International Conference on Neural Information Processing Systems Volume 2. MIT Press, Cambridge, MA, USA, NIPS’15, pages 1828–1836. http://dl.acm.org/citation.cfm?id=2969442.2969444. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. CoRR abs/1512.03385. http://arxiv.org/abs/1512.03385. Sepp Hochreiter. 1991. Untersuchungen zu dynamischen neuronalen netzen. Diploma, Technische Universit¨at M¨unchen 91:1. Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jrgen Schmidhuber. 2001. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Kurt Hornik, Maxwell Stinchcombe, and Halbert White. 1989. Multilayer feedforward networks are universal approximators. Neural Networks 2(5):359 – 366. S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, and et al. 2017. In-datacenter performance analysis of a tensor processing unit. CoRR abs/1704.04760. http://arxiv.org/abs/1704.04760. Marcin Junczys-Dowmunt, Tomasz Dwojak, and Rico Sennrich. 2016. 
The amu-uedin submission to the wmt16 news translation task: Attention-based nmt models as feature functions in phrase-based smt. arXiv preprint arXiv:1605.04809 . Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Conference on Empirical Methods in Natural Language Processing. Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, A¨aron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. Neural machine translation in linear time. CoRR abs/1610.10099. http://arxiv.org/abs/1610.10099. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. http://arxiv.org/abs/1412.6980. Yann LeCun and Yoshua Bengio. 1998. The handbook of brain theory and neural networks. MIT Press, Cambridge, MA, USA, chapter Convolutional Networks for Images, Speech, and Time Series, pages 255–258. http://dl.acm.org/citation.cfm?id=303568.303704. Ankur P. Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In EMNLP. Razvan Pascanu, C¸ aglar G¨ulc¸ehre, Kyunghyun Cho, and Yoshua Bengio. 2013. How to construct deep recurrent neural networks. CoRR abs/1312.6026. http://arxiv.org/abs/1312.6026. Tim Salimans and Diederik P. Kingma. 2016. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. CoRR abs/1602.07868. http://arxiv.org/abs/1602.07868. M. Schuster and K.K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673–2681. Mike Schuster and Kaisuke Nakajima. 2012. Japanese and Korean voice search. 2012 IEEE International Conference on Acoustics, Speech and Signal Processing . H.T. Siegelmann and E.D. Sontag. 1995. On the computational power of neural nets. Journal of Computer and System Sciences 50(1):132 – 150. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. CoRR abs/1505.00387. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems. pages 3104–3112. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2015. Rethinking the inception architecture for computer vision. CoRR abs/1512.00567. http://arxiv.org/abs/1512.00567. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR abs/1706.03762. http://arxiv.org/abs/1706.03762. 86 Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR abs/1609.08144. http://arxiv.org/abs/1609.08144. Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep recurrent models with fast-forward connections for neural machine translation. CoRR abs/1606.04199. http://arxiv.org/abs/1606.04199.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 866–876 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 866 Style Transfer Through Back-Translation Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, Alan W Black Carnegie Mellon University, Pittsburgh, PA, USA {sprabhum,ytsvetko,rsalakhu,awb}@cs.cmu.edu Abstract Style transfer is the task of rephrasing the text to contain specific stylistic properties without changing the intent or affect within the context. This paper introduces a new method for automatic style transfer. We first learn a latent representation of the input sentence which is grounded in a language translation model in order to better preserve the meaning of the sentence while reducing stylistic properties. Then adversarial generation techniques are used to make the output match the desired style. We evaluate this technique on three different style transformations: sentiment, gender and political slant. Compared to two state-of-the-art style transfer modeling techniques we show improvements both in automatic evaluation of style transfer and in manual evaluation of meaning preservation and fluency. 1 Introduction Intelligent, situation-aware applications must produce naturalistic outputs, lexicalizing the same meaning differently, depending upon the environment. This is particularly relevant for language generation tasks such as machine translation (Sutskever et al., 2014; Bahdanau et al., 2015), caption generation (Karpathy and Fei-Fei, 2015; Xu et al., 2015), and natural language generation (Wen et al., 2017; Kiddon et al., 2016). In conversational agents (Ritter et al., 2011; Sordoni et al., 2015; Vinyals and Le, 2015; Li et al., 2016), for example, modulating the politeness style, to sound natural depending upon a situation: at a party with friends “Shut up! the video is starting!”, or in a professional setting “Please be quiet, the video will begin shortly.”. These goals have motivated a considerable amount of recent research efforts focused at “controlled” language generation—aiming at separating the semantic content of what is said from the stylistic dimensions of how it is said. These include approaches relying on heuristic substitutions, deletions, and insertions to modulate demographic properties of a writer (Reddy and Knight, 2016), integrating stylistic and demographic speaker traits in statistical machine translation (Rabinovich et al., 2016; Niu et al., 2017), and deep generative models controlling for a particular stylistic aspect, e.g., politeness (Sennrich et al., 2016), sentiment, or tense (Hu et al., 2017; Shen et al., 2017). The latter approaches to style transfer, while more powerful and flexible than heuristic methods, have yet to show that in addition to transferring style they effectively preserve meaning of input sentences. This paper introduces a novel approach to transferring style of a sentence while better preserving its meaning. We hypothesize—relying on the study of Rabinovich et al. (2016) who showed that author characteristics are significantly obfuscated by both manual and automatic machine translation—that grounding in back-translation is a plausible approach to rephrase a sentence while reducing its stylistic properties. 
We thus first use back-translation to rephrase the sentence and reduce the effect of the original style; then, we generate from the latent representation, using separate style-specific generators controlling for style (§2). We focus on transferring author attributes: (1) gender and (2) political slant, and (3) on sentiment modification. The second task is novel: given a sentence by an author with a particular political leaning, rephrase the sentence to preserve its meaning but to confound classifiers of political slant (§3). The task of sentiment modification enables us to compare our approach with state-of867 Figure 1: Style transfer pipeline: to rephrase a sentence and reduce its stylistic characteristics, the sentence is back-translated. Then, separate style-specific generators are used for style transfer. the-art models (Hu et al., 2017; Shen et al., 2017). Style transfer is evaluated using style classifiers trained on held-out data. Our back-translation style transfer model outperforms the state-of-theart baselines (Shen et al., 2017; Hu et al., 2017) on the tasks of political slant and sentiment modification; 12% absolute improvement was attained for political slant transfer, and up to 7% absolute improvement in modification of sentiment (§5). Meaning preservation was evaluated manually, using A/B testing (§4). Our approach performs better than the baseline on the task of transferring gender and political slant. Finally, we evaluate the fluency of the generated sentences using human evaluation and our model outperforms the baseline in all experiments for fluency. The main contribution of this work is a new approach to style transfer that outperforms stateof-the-art baselines in both the quality of input– output correspondence (meaning preservation and fluency), and the accuracy of style transfer. The secondary contribution is a new task that we propose to evaluate style transfer: transferring political slant. 2 Methodology Given two datasets X1 = {x(1) 1 , . . . , x(n) 1 } and X2 = {x(1) 2 , . . . , x(n) 2 } which represent two different styles s1 and s2, respectively, our task is to generate sentences of the desired style while preserving the meaning of the input sentence. Specifically, we generate samples of dataset X1 such that they belong to style s2 and samples of X2 such that they belong to style s1. We denote the output of dataset X1 transfered to style s2 as ˆ X1 = {ˆx(1) 2 , . . . , ˆx(n) 2 } and the output of dataset X2 transferred to style s1 as ˆ X2 = {ˆx(1) 1 , . . . , ˆx(n) 1 }. Hu et al. (2017) and Shen et al. (2017) introduced state-of-the-art style transfer models that use variational auto-encoders (Kingma and Welling, 2014, VAEs) and cross-aligned autoencoders, respectively, to model a latent content variable z. The latent content variable z is a code which is not observed. The generative model conditions on this code during the generation process. Our aim is to design a latent code z which (1) represents the meaning of the input sentence grounded in back-translation and (2) weakens the style attributes of author’s traits. To model the former, we use neural machine translation. Prior work has shown that the process of translating a sentence from a source language to a target language retains the meaning of the sentence but does not preserve the stylistic features related to the author’s traits (Rabinovich et al., 2016). 
We hypothesize that a latent code z obtained through backtranslation will normalize the sentence and devoid it from style attributes specific to author’s traits. Figure 1 shows the overview of the proposed method. In our framework, we first train a machine translation model from source language e to a target language f. We also train a backtranslation model from f to e. Let us assume our styles s1 and s2 correspond to DEMOCRATIC and REPUBLICAN style, respectively. In Figure 1, the input sentence i thank you, rep. visclosky. is labeled as DEMOCRATIC. We translate the sentence using the e →f machine translation model and generate the parallel sentence in the target language f: je vous remercie, rep. visclosky. Using the fixed encoder of the f →e machine translation model, we encode this sentence in language f. The hidden representation created by this encoder of the back-translation model is used as z. We condition our generative models on this z. We then train two separate decoders for each style s1 and s2 to generate samples in these respective styles in source language e. Hence the sentence could be translated to the REPUBLICAN style using the decoder for s2. For example, the sentence i’m praying for you sir. is the REPUBLICAN ver868 Figure 2: The latent representation from back-translation and the style classifier feedback are used to guide the style-specific generators. sion of the input sentence and i thank you, senator visclosky. is the more DEMOCRATIC version of it. Note that in this setting, the machine translation and the encoder of the back-translation model remain fixed. They are not dependent on the data we use across different tasks. This facilitates reusability and spares the need of learning separate models to generate z for a new style data. 2.1 Meaning-Grounded Representation In this section we describe how we learn the latent content variable z using back-translation. The e →f machine translation and f →e backtranslation models are trained using a sequence-tosequence framework (Sutskever et al., 2014; Bahdanau et al., 2015) with style-agnostic corpus. The style-specific sentence i thank you, rep. visclosky. in source language e is translated to the target language f to get je vous remercie, rep. visclosky. The individual tokens of this sentence are then encoded using the encoder of the f →e backtranslation model. The learned hidden representation is z. Formally, let θE represent the parameters of the encoder of f →e translation system. Then z is given by: z = Encoder(xf; θE) (1) where, xf is the sentence x in language f. Specifically, xf is the output of e →f translation system when xe is given as input. Since z is derived from a non-style specific process, this Encoder is not style specific. 2.2 Style-Specific Generation Figure 2 shows the architecture of the generative model for generating different styles. Using the encoder embedding z, we train multiple decoders for each style. The sentence generated by a decoder is passed through the classifier. The loss of the classifier for the generated sentence is used as feedback to guide the decoder for the generation process. The target attribute of the classifier is determined by the decoder from which the output is generated. For example, in the case of DEMOCRATIC decoder, the target attribute is DEMOCRATIC and for the REPUBLICAN decoder the target is REPUBLICAN. 2.2.1 Style Classifiers We train a convolutional neural network (CNN) classifier to accurately predict the given style. 
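The framework described above (translate with the e→f system, encode with the fixed f→e encoder to obtain z, and decode with a style-specific generator guided by this style classifier) can be summarized in a short sketch. The component interfaces below are hypothetical stand-ins for the trained models, not the authors' released code.

```python
import torch
import torch.nn as nn

class BackTranslationStyleTransfer(nn.Module):
    """z = Encoder(x_f; theta_E) as in Eq. (1), with one decoder per style."""
    def __init__(self, translate_e2f, f2e_encoder: nn.Module, style_decoders: dict):
        super().__init__()
        self.translate_e2f = translate_e2f        # pretrained e -> f MT system
        self.f2e_encoder = f2e_encoder            # fixed encoder of the f -> e model
        self.style_decoders = nn.ModuleDict(style_decoders)
        for param in self.f2e_encoder.parameters():
            param.requires_grad = False           # the encoder is never updated

    def forward(self, x_e_tokens, target_style: str):
        x_f_tokens = self.translate_e2f(x_e_tokens)   # e.g. English -> French
        z = self.f2e_encoder(x_f_tokens)              # meaning-grounded latent code
        return self.style_decoders[target_style](z)   # generate in the desired style
```

Transferring the DEMOCRATIC example of Figure 1 to the REPUBLICAN style then amounts to selecting the corresponding decoder, e.g. model(tokens, target_style="republican"); the translator and encoder are shared across tasks, so only the decoders and the classifier guiding them change.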
We also use it to evaluate the error in the generated samples for the desired style. We train the classifier in a supervised manner. The classifier accepts either discrete or continuous tokens as inputs. This is done such that the generator output can be used as input to the classifier. We need labeled examples to train the classifier such that each instance in the dataset X should have a label in the set s = {s1, s2}. Let θC denote the parameters of the classifier. The objective to train the classifier is given by: Lclass(θC) = EX[log qC(s|x)]. (2) To improve the accuracy of the classifier, we augment classifier’s inputs with style-specific lexicons. We concatenate binary style indicators to each input word embedding in the classifier. The indicators are set to 1 if the input word is present in a style-specific lexicon; otherwise they are set to 0. Style lexicons are extracted using the log-odds ratio informative Dirichlet prior (Monroe et al., 2008), a method that identifies words that are statistically overrepresented in each of the categories. 869 2.2.2 Generator Learning We use a bidirectional LSTM to build our decoders which generate the sequence of tokens ˆx = {x1, · · · xT }. The sequence ˆx is conditioned on the latent code z (in our case, on the machine translation model). In this work we use a corpus translated to French by the machine translation system as the input to the encoder of the backtranslation model. The same encoder is used to encode sentences of both styles. The representation created by this encoder is given by Eq 1. Samples are generated as follows: ˆx ∼z = p(ˆx|z) (3) = Y t p(ˆxt|ˆx<t, z) (4) where, ˆx<t are the tokens generated before ˆxt. Tokens are discrete and non-differentiable. This makes it difficult to use a classifier, as the generation process samples discrete tokens from the multinomial distribution parametrized using softmax function at each time step t. This nondifferentiability, in turn, breaks down gradient propagation from the discriminators to the generator. Instead, following Hu et al. (2017) we use a continuous approximation based on softmax, along with the temperature parameter which anneals the softmax to the discrete case as training proceeds. To create a continuous representation of the output of the generative model which will be given as an input to the classifier, we use: ˆxt ∼softmax(ot/τ), where, ot is the output of the generator and τ is the temperature which decreases as the training proceeds. Let θG denote the parameters of the generators. Then the reconstruction loss is calculated using the cross entropy function, given by: Lrecon(θG; x) = EqE(z|x)[log pgen(x|z)] (5) Here, the back-translation encoder E creates the latent code z by: z = E(x) = qE(z|x) (6) The generative loss Lgen is then given by: minθgenLgen = Lrecon + λcLclass (7) where Lrecon is given by Eq. (5), Lclass is given by Eq (2) and λc is a balancing parameter. We also use global attention of (Luong et al., 2015) to aid our generators. At each time step t of the generation process, we infer a variable length alignment vector at: at = exp(score(ht, ¯hs)) P s′ exp(score(ht, ¯hs′) (8) score(ht, ¯hs) = dot(hT t , ¯hs), (9) where ht is the current target state and ¯hs are all source states. While generating sentences, we use the attention vector to replace unknown characters (UNK) using the copy mechanism in (See et al., 2017). 
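Because sampled tokens are discrete, the classifier feedback is routed through the temperature-annealed softmax above. The sketch below writes Eq. (7) as losses to be minimized; mapping the soft distribution through the classifier's embedding matrix is our assumption (the text only states that the classifier accepts continuous tokens), while λc = 15 is the value reported in §4.

```python
import torch
import torch.nn.functional as F

def soft_tokens(decoder_logits: torch.Tensor, tau: float) -> torch.Tensor:
    """Continuous approximation of sampling: x_hat_t ~ softmax(o_t / tau),
    with tau annealed toward the discrete case as training proceeds."""
    return F.softmax(decoder_logits / tau, dim=-1)

def generator_loss(decoder_logits, gold_tokens, classifier, clf_embedding,
                   target_style, tau, lambda_c: float = 15.0):
    """L_gen = L_recon + lambda_c * L_class (Eq. 7), written as losses to minimize."""
    vocab = decoder_logits.size(-1)
    # Reconstruction loss (Eq. 5): cross-entropy against the input tokens.
    recon = F.cross_entropy(decoder_logits.reshape(-1, vocab), gold_tokens.reshape(-1))
    # Classifier loss (Eq. 2) on the soft output sequence.
    soft = soft_tokens(decoder_logits, tau)        # (batch, time, vocab)
    soft_embeds = soft @ clf_embedding             # (batch, time, emb_dim); assumed input format
    style_logits = classifier(soft_embeds)         # (batch, num_styles)
    clf = F.cross_entropy(style_logits, target_style)  # target = decoder's own style id
    return recon + lambda_c * clf
```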
3 Style Transfer Tasks Much work in computational social science has shown that people’s personal and demographic characteristics—either publicly observable (e.g., age, gender) or private (e.g., religion, political affiliation)—are revealed in their linguistic choices (Nguyen et al., 2016). There are practical scenarios, however, when these attributes need to be modulated or obfuscated. For example, some users may wish to preserve their anonymity online, for personal security concerns (Jardine, 2016), or to reduce stereotype threat (Spencer et al., 1999). Modulating authors’ attributes while preserving meaning of sentences can also help generate demographically-balanced training data for a variety of downstream applications. Moreover, prior work has shown that the quality of language identification and POS tagging degrades significantly on African American Vernacular English (Blodgett et al., 2016; Jørgensen et al., 2015); YouTube’s automatic captions have higher error rates for women and speakers from Scotland (Rudinger et al., 2017). Synthesizing balanced training data—using style transfer techniques—is a plausible way to alleviate bias present in existing NLP technologies. We thus focus on two tasks that have practical and social-good applications, and also accurate style classifiers. To position our method with respect to prior work, we employ a third task of sentiment transfer, which was used in two stateof-the-art approaches to style transfer (Hu et al., 2017; Shen et al., 2017). We describe the three tasks and associated dataset statistics below. The methodology that we advocate is general and can be applied to other styles, for transferring various 870 social categories, types of bias, and in multi-class settings. Gender. In sociolinguistics, gender is known to be one of the most important social categories driving language choice (Eckert and McConnellGinet, 2003; Lakoff and Bucholtz, 2004; Coates, 2015). Reddy and Knight (2016) proposed a heuristic-based method to obfuscate gender of a writer. This method uses statistical association measures to identify gender-salient words and substitute them with synonyms typically of the opposite gender. This simple approach produces highly fluent, meaning-preserving sentences, but does not allow for more general rephrasing of sentence beyond single-word substitutions. In our work, we adopt this task of transferring the author’s gender and adapt it to our experimental settings. We used Reddy and Knight’s (2016) dataset of reviews from Yelp annotated for two genders corresponding to markers of sex.1 We split the reviews to sentences, preserving the original gender labels. To keep only sentences that are strongly indicative of a gender, we then filtered out genderneutral sentences (e.g., thank you) and sentences whose likelihood to be written by authors of one gender is lower than 0.7.2 Political slant. Our second dataset is comprised of top-level comments on Facebook posts from all 412 current members of the United States Senate and House who have public Facebook pages (Voigt et al., 2018).3 Only top-level comments that directly respond to the post are included. Every comment to a Congressperson is labeled with the Congressperson’s party affiliation: democratic or republican. Topic and sentiment in these comments reveal commenter’s political slant. For example, defund them all, especially when it comes to the illegal immigrants . and thank u james, praying for all the work u do . 
are republican, whereas on behalf of the hard-working nh public school teachers- thank you ! and we need more strong voices like yours fighting for gun control . 1We note that gender may be considered along a spectrum (Eckert and McConnell-Ginet, 2003), but use gender as a binary variable due to the absence of corpora with continuous-valued gender annotations. 2We did not experiment with other threshold values. 3The posts and comments are all public; however, to protect the identity of Facebook users in this dataset Voigt et al. (2018) have removed all identifying user information as well as Facebook-internal information such as User IDs and Post IDs, replacing these with randomized ID numbers. Style class train dev test gender 2.57M 2.67M 4.5K 535K political 80K 540K 4K 56K sentiment 2M 444K 63.5K 127K Table 1: Sentence count in style-specific corpora. represent examples of democratic sentences. Our task is to preserve intent of the commenter (e.g., to thank their representative), but to modify their observable political affiliation, as in the example in Figure 1. We preprocessed and filtered the comments similarly to the gender-annotated corpus above. Sentiment. To compare our work with the stateof-the-art approaches of style transfer for nonparallel corpus we perform sentiment transfer, replicating the models and experimental setups of Hu et al. (2017) and Shen et al. (2017). Given a positive Yelp review, a style transfer model will generate a similar review but with an opposite sentiment. We used Shen et al.’s (2017) corpus of reviews from Yelp. They have followed the standard practice of labeling the reviews with rating of higher than three as positive and less than three as negative. They have also split the reviews to sentences and assumed that the sentence has the same sentiment as the review. Dataset statistics. We summarize below corpora statistics for the three tasks: transferring gender, political slant, and sentiment. The dataset for sentiment modification task was used as described in (Shen et al., 2017). We split Yelp and Facebook corpora into four disjoint parts each: (1) a training corpus for training a style classifier (class); (2) a training corpus (train) used for training the stylespecific generative model described in §2.2; (3) development and (4) test sets. We have removed from training corpora class and train all sentences that overlap with development and test corpora. Corpora sizes are shown in Table 1. Table 2 shows the approximate vocabulary sizes used for each dataset. The vocabulary is the same for both the styles in each experiment. Style gender political sentiment Vocabulary 20K 20K 10K Table 2: Vocabulary sizes of the datasets. Table 3 summarizes sentence statistics. All the 871 sentences have maximum length of 50 tokens. Style Avg. Length %data male 18.08 50.00 female 18.21 50.00 republican 16.18 50.00 democratic 16.01 50.00 negative 9.66 39.81 positive 8.45 60.19 Table 3: Average sentence length and class distribution of style corpora. 4 Experimental Setup In what follows, we describe our experimental settings, including baselines used, hyperparameter settings, datasets, and evaluation setups. Baseline. We compare our model against the “cross-aligned” auto-encoder (Shen et al., 2017), which uses style-specific decoders to align the style of generated sentences to the actual distribution of the style. We used the off-the-shelf sentiment model released by Shen et al. (2017) for the sentiment experiments. 
We also separately train this model for the gender and political slant using hyper-parameters detailed below.4 Translation data. We trained an English– French neural machine translation system and a French–English back-translation system. We used data from Workshop in Statistical Machine Translation 2015 (WMT15) (Bojar et al., 2015) to train our translation models. We used the French– English data from the Europarl v7 corpus, the news commentary v10 corpus and the common crawl corpus from WMT15. Data were tokenized using the Moses tokenizer (Koehn et al., 2007). Approximately 5.4M English–French parallel sentences were used for training. A vocabulary size of 100K was used to train the translation systems. Hyperparameter settings. In all the experiments, the generator and the encoders are a twolayer bidirectional LSTM with an input size of 300 and the hidden dimension of 500. The generator 4In addition, we compared our model with the current state-of-the-art approach introduced by Hu et al. (2017); Shen et al. (2017) use this method as baseline, obtaining comparable results. We reproduced the results reported in (Hu et al., 2017) using their tasks and data. However, the same model trained on our political slant datasets (described in §3), obtained an almost random accuracy of 50.98% in style transfer. We thus omit these results. samples a sentence of maximum length 50. All the generators use global attention vectors of size 500. The CNN classifier is trained with 100 filters of size 5, with max-pooling. The input to CNN is of size 302: the 300-dimensional word embedding plus two bits for membership of the word in our style lexicons, as described in §2.2.1. Balancing parameter λc is set to 15. For sentiment task, we have used settings provided in (Shen et al., 2017). 5 Results We evaluate our approach along three dimensions. (1) Style transfer accuracy, measuring the proportion of our models’ outputs that generate sentences of the desired style. The style transfer accuracy is performed using classifiers trained on held-out train data that were not used in training the style transfer models. (2) Preservation of meaning. (3) Fluency, measuring the readability and the naturalness of the generated sentences. We conducted human evaluations for the latter two. In what follows, we first present the quality of our neural machine translation systems, then we present the evaluation setups, and then present the results of our experiments. Translation quality. The BLEU scores achieved for English–French MT system is 32.52 and for French–English MT system is 31.11; these are strong translation systems. We deliberately chose a European language close to English for which massive amounts of parallel data are available and translation quality is high, to concentrate on the style generation, rather than improving a translation system. 5 5.1 Style Transfer Accuracy We measure the accuracy of style transfer for the generated sentences using a pre-trained style classifier (§2.2.1). The classifier is trained on data that is not used for training our style transfer generative models (as described in §3). The classifier has an accuracy of 82% for the gender-annotated corpus, 92% accuracy for the political slant dataset and 93.23% accuracy for the sentiment dataset. 5Alternatively, we could use a pivot language that is typologically more distant from English, e.g., Chinese. 
In this case we hypothesize that stylistic traits would be even less preserved in translation, but the quality of back-translated sentences would be worse. We have not yet investigated how the accuracy of the translation model, nor the language of translation affects our models. 872 We transfer the style of test sentences and then test the classification accuracy of the generated sentences for the opposite label. For example, if we want to transfer the style of male Yelp reviews to female, then we use the fixed common encoder of the back-translation model to encode the test male sentences and then we use the female generative model to generate the female-styled reviews. We then test these generated sentences for the female label using the gender classifier. Experiment CAE BST Gender 60.40 57.04 Political slant 75.82 88.01 Sentiment 80.43 87.22 Table 4: Accuracy of the style transfer in generated sentences. In Table 4, we detail the accuracy of each classifier on generated style-transfered sentences.6 We denote the Shen et al.’s (2017) Cross-aligned Auto-Encoder model as CAE and our model as Back-translation for Style Transfer (BST). On two out of three tasks our model substantially outperforms the baseline, by up to 12% in political slant transfer, and by up to 7% in sentiment modification. 5.2 Preservation of Meaning Although we attempted to use automatics measures to evaluate how well meaning is preserved in our transformations; measures such as BLEU (Papineni et al., 2002) and Meteor (Denkowski and Lavie, 2011), or even cosine similarity between distributed representations of sentences do not capture this distance well. Meaning preservation in style transfer is not trivial to define as literal meaning is likely to change when style transfer occurs. For example “My girlfriend loved the desserts” vs “My partner liked the desserts”. Thus we must relax the condition of literal meaning to intent or affect of the utterance within the context of the discourse. Thus if the intent is to criticize a restaurant’s service in a review, changing “salad” to “chicken” could still have the same effect but if the intent is to order food that substitution would not be acceptable. Ideally we wish to evaluate transfer within some 6In each experiment, we report aggregated results across directions of style transfer; same results broke-down to style categories are listed in the Supplementary Material. Experiment CAE No Pref. BST Gender 15.23 41.36 43.41 Political slant 14.55 45.90 39.55 Sentiment 35.91 40.91 23.18 Table 5: Human preference for meaning preservation in percentages. downstream task and ensure that the task has the same outcome even after style transfer. This is a hard evaluation and hence we resort to a simpler evaluation of the “meaning” of the sentence. We set up a manual pairwise comparison following Bennett (2005). The test presents the original sentence and then, in random order, its corresponding sentences produced by the baseline and our models. For the gender style transfer we asked “Which transferred sentence maintains the same sentiment of the source sentence in the same semantic context (i.e. you can ignore if food items are changed)”. For the task of changing the political slant, we asked “Which transferred sentence maintains the same semantic intent of the source sentence while changing the political position”. 
For the task of sentiment transfer we have followed the annotation instruction in (Shen et al., 2017) and asked “Which transferred sentence is semantically equivalent to the source sentence with an opposite sentiment” We then count the preferences of the eleven participants, measuring the relative acceptance of the generated sentences.7 A third option “=” was given to participants to mark no preference for either of the generated sentence. The “no preference” option includes choices both are equally bad and both are equally good. We conducted three tests one for each type of experiment - gender, political slant and sentiment. We also divided our annotation set into short (#tokens ≤15) and long (15 < #tokens ≤30) sentences for the gender and the political slant experiment. In each set we had 20 random samples for each type of style transfer. In total we had 100 sentences to be annotated. Note that we did not ask about appropriateness of the style transfer in this test, or fluency of outputs, only about meaning preservation. The results of human evaluation are presented in Table 5. Although a no-preference option was chosen often—showing that state-ofthe-art systems are still not on par with hu7None of the human judges are authors of this paper 873 man expectations—the BST models outperform the baselines in the gender and the political slant transfer tasks. Crucially, the BST models significantly outperform the CAE models when transferring style in longer and harder sentences. Annotators preferred the CAE model only for 12.5% of the long sentences, compared to 47.27% preference for the BST model. 5.3 Fluency Finally, we evaluate the fluency of the generated sentences. Fluency was rated from 1 (unreadable) to 4 (perfect) as is described in (Shen et al., 2017). We randomly selected 60 sentences each generated by the baseline and the BST model. The results shown in Table 6 are averaged scores for each model. Experiment CAE BST Gender 2.42 2.81 Political slant 2.79 2.87 Sentiment 3.09 3.18 Overall 2.70 2.91 Overall Short 3.05 3.11 Overall Long 2.18 2.62 Table 6: Fluency of the generated sentences. BST outperforms the baseline overall. It is interesting to note that BST generates significantly more fluent longer sentences than the baseline model. Since the average length of sentences was higher for the gender experiment, BST notably outperformed the baseline in this task, relatively to the sentiment task where the sentences are shorter. Examples of the original and style-transfered sentences generated by the baseline and our model are shown in the Supplementary Material. 5.4 Discussion The loss function of the generators given in Eq. 5 includes two competing terms, one to improve meaning preservation and the other to improve the style transfer accuracy. In the task of sentiment modification, the BST model preserved meaning worse than the baseline, on the expense of being better at style transfer. We note, however, that the sentiment modification task is not particularly well-suited for evaluating style transfer: it is particularly hard (if not impossible) to disentangle the sentiment of a sentence from its propositional content, and to modify sentiment while preserving meaning or intent. On the other hand, the style-transfer accuracy for gender is lower for BST model but the preservation of meaning is much better for the BST model, compared to CAE model and to ”No preference” option. 
This means that the BST model does better job at closely representing the input sentence while taking a mild hit in the style transfer accuracy. 6 Related Work Style transfer with non-parallel text corpus has become an active research area due to the recent advances in text generation tasks. Hu et al. (2017) use variational auto-encoders with a discriminator to generate sentences with controllable attributes. The method learns a disentangled latent representation and generates a sentence from it using a code. This paper mainly focuses on sentiment and tense for style transfer attributes. It evaluates the transfer strength of the generated sentences but does not evaluate the extent of preservation of meaning in the generated sentences. In our work, we show a qualitative evaluation of meaning preservation. Shen et al. (2017) first present a theoretical analysis of style transfer in text using non-parallel corpus. The paper then proposes a novel crossalignment auto-encoders with discriminators architecture to generate sentences. It mainly focuses on sentiment and word decipherment for style transfer experiments. Fu et al. (2018) explore two models for style transfer. The first approach uses multiple decoders for each type of style. In the second approach, style embeddings are used to augment the encoded representations, so that only one decoder needs to be learned to generate outputs in different styles. Style transfer is evaluated on scientific paper titles and newspaper tiles, and sentiment in reviews. This method is different from ours in that we use machine translation to create a strong latent state from which multiple decoders can be trained for each style. We also propose a different human evaluation scheme. Li et al. (2018) first extract words or phrases associated with the original style of the sentence, delete them from the original sentence and then replace them with new phrases associated with the target style. They then use a neural model to fluently combine these into a final output. Junbo 874 et al. (2017) learn a representation which is styleagnostic, using adversarial training of the autoencoder. Our work is also closely-related to a problem of paraphrase generation (Madnani and Dorr, 2010; Dong et al., 2017), including methods relying on (phrase-based) back-translation (Ganitkevitch et al., 2011; Ganitkevitch and Callison-Burch, 2014). More recently, Mallinson et al. (2017) and Wieting et al. (2017) showed how neural backtranslation can be used to generate paraphrases. An additional related line of research is machine translation with non-parallel data. Lample et al. (2018) and Artetxe et al. (2018) have proposed sophisticated methods for unsupervised machine translation. These methods could in principle be used for style transfer as well. 7 Conclusion We propose a novel approach to the task of style transfer with non-parallel text.8 We learn a latent content representation using machine translation techniques; this aids grounding the meaning of the sentences, as well as weakening the style attributes. We apply this technique to three different style transfer tasks. In transfer of political slant and sentiment we outperform an off-the-shelf state-of-the-art baseline using a cross-aligned autoencoder. The political slant task is a novel task that we introduce. Our model also outperforms the baseline in all the experiments of fluency, and in the experiments for meaning preservation in generated sentences of gender and political slant. 
Yet, we acknowledge that the generated sentences do not always adequately preserve meaning. This technique is suitable not just for style transfer, but for enforcing style, and removing style too. In future work we intend to apply this technique to debiasing sentences and anonymization of author traits such as gender and age. In the future work, we will also explore whether an enhanced back-translation by pivoting through several languages will learn better grounded latent meaning representations. In particular, it would be interesting to back-translate through multiple target languages with a single source language (Johnson et al., 2016). 8All the code and data used in the experiments will be released to facilitate reproducibility at https://github.com/shrimai/Style-Transfer-Through-BackTranslation Measuring the separation of style from content is hard, even for humans. It depends on the task and the context of the utterance within its discourse. Ultimately we must evaluate our style transfer within some down-stream task where our style transfer has its intended use but we achieve the same task completion criteria. Acknowledgments This work was funded by a fellowship from Robert Bosch, and in part by the National Science Foundation through award IIS-1526745. We would like to thank Sravana Reddy for sharing the Yelp corpus used in gender transfer experiments, Zhiting Hu for providing an implementation of a VAEbased baseline, and the 11 CMU graduate students who helped with annotation and manual evaluations. We are also grateful to the anonymous reviewers for their constructive feedback, and to Dan Jurafsky, David Jurgens, Vinod Prabhakaran, and Rob Voigt for valuable discussions at earlier stages of this work. References Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In Proc ICLR. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. ICLR. Christina L Bennett. 2005. Large scale evaluation of corpus-based synthesizers: Results and lessons from the blizzard challenge 2005. In Ninth European Conference on Speech Communication and Technology. Su Lin Blodgett, Lisa Green, and Brendan O’Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. In Proc. EMNLP. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In Proc. WMT, pages 1–46. Jennifer Coates. 2015. Women, men and language: A sociolinguistic account of gender differences in language. Routledge. Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and 875 evaluation of machine translation systems. In Proc. WMT, pages 85–91. Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 875–886. Association for Computational Linguistics. Penelope Eckert and Sally McConnell-Ginet. 2003. Language and gender. Cambridge University Press. Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style Transfer in Text: Exploration and Evaluation. In Proc. AAAI. Juri Ganitkevitch and Chris Callison-Burch. 2014. 
The multilingual paraphrase database. In Proc. LREC, pages 4276–4283. Juri Ganitkevitch, Chris Callison-Burch, Courtney Napoles, and Benjamin Van Durme. 2011. Learning sentential paraphrases from bilingual parallel corpora for text-to-text generation. In Proc. EMNLP, pages 1168–1179. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In Proc. ICML, pages 1587–1596. Eric Jardine. 2016. Tor, what is it good for? political repression and the use of online anonymity-granting technologies. New Media & Society. Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, et al. 2016. Google’s multilingual neural machine translation system: enabling zero-shot translation. arXiv preprint arXiv:1611.04558. Anna Jørgensen, Dirk Hovy, and Anders Søgaard. 2015. Challenges of studying and processing dialects in social media. In Proc. of the Workshop on Noisy User-generated Text, pages 9–18. Junbo, Zhao, Y. Kim, K. Zhang, A. M. Rush, and Y. LeCun. 2017. Adversarially Regularized Autoencoders for Generating Discrete Structures. ArXiv eprints. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proc. CVPR, pages 3128–3137. Chlo´e Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In Proc. EMNLP, pages 329–339. Diederik P Kingma and Max Welling. 2014. Autoencoding variational bayes. In Proc. ICLR. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. ACL (demonstration sessions), pages 177–180. Robin Tolmach Lakoff and Mary Bucholtz. 2004. Language and woman’s place: Text and commentaries, volume 3. Oxford University Press, USA. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In Proc. ICLR. J. Li, R. Jia, H. He, and P. Liang. 2018. Delete, Retrieve, Generate: A Simple Approach to Sentiment and Style Transfer. ArXiv e-prints. Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. In Proc. ACL. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proc. EMNLP. Nitin Madnani and Bonnie J Dorr. 2010. Generating phrasal and sentential paraphrases: A survey of data-driven methods. Computational Linguistics, 36(3):341–387. Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proce. EACL, volume 1, pages 881–893. Burt L. Monroe, Michael P. Colaresi, and Kevin M. Quinn. 2008. Fightin words: Lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis. Dong Nguyen, A. Seza Do˘gru¨oz, Carolyn P. Ros´e, and Franciska de Jong. 2016. Computational sociolinguistics: A survey. Computational Linguistics, 42(3):537–593. Xing Niu, Marianna Martindale, and Marine Carpuat. 2017. A study of style in machine translation: Controlling the formality of machine translation output. In Proc. EMNLP, pages 2804–2809. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. 
BLEU: a method for automatic evaluation of machine translation. In Proc. ACL, pages 311–318. Ella Rabinovich, Shachar Mirkin, Raj Nath Patel, Lucia Specia, and Shuly Wintner. 2016. Personalized machine translation: Preserving original author traits. In Proc. EACL. 876 Sravana Reddy and Kevin Knight. 2016. Obfuscating gender in social media writing. In Proc. of Workshop on Natural Language Processing and Computational Social Science, pages 17–26. Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proc. EMNLP, pages 583–593. Rachel Rudinger, Chandler May, and Benjamin Van Durme. 2017. Social bias in elicited natural language inferences. In Proc. of the First Workshop on Ethics in Natural Language Processing, page 74. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proc. ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In Proc. NAACL, pages 35–40. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Proc. NIPS. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proc. NAACL. Steven J. Spencer, Claude M. Steele, and Diane M. Quinn. 1999. Stereotype Threat and Women’s Math Performance. Journal of Experimental Social Psychology, 35:4–28. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proc. NIPS, pages 3104–3112. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In Proc. ICML Deep Learning Workshop. Rob Voigt, David Jurgens, Vinodkumar Prabhakaran, Dan Jurafsky, and Yulia Tsvetkov. 2018. RtGender: A corpus for studying differential responses to gender. In Proc. LREC. Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In Proc. EACL. John Wieting, Jonathan Mallinson, and Kevin Gimpel. 2017. Learning paraphrastic sentence embeddings from back-translated bitext. In Proc. EMNLP, pages 274–285. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proc. ICML, pages 2048–2057.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 877–888 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 877 Generating Fine-Grained Open Vocabulary Entity Type Descriptions Rajarshi Bhowmik and Gerard de Melo Department of Computer Science Rutgers University – New Brunswick Piscataway, NJ, USA {rajarshi.bhowmik, gerard.demelo}@cs.rutgers.edu Abstract While large-scale knowledge graphs provide vast amounts of structured facts about entities, a short textual description can often be useful to succinctly characterize an entity and its type. Unfortunately, many knowledge graph entities lack such textual descriptions. In this paper, we introduce a dynamic memory-based network that generates a short open vocabulary description of an entity by jointly leveraging induced fact embeddings as well as the dynamic context of the generated sequence of words. We demonstrate the ability of our architecture to discern relevant information for more accurate generation of type description by pitting the system against several strong baselines. 1 Introduction Broad-coverage knowledge graphs such as Freebase, Wikidata, and NELL are increasingly being used in many NLP and AI tasks. For instance, DBpedia and YAGO were vital for IBM’s Watson! Jeopardy system (Welty et al., 2012). Google’s Knowledge Graph is tightly integrated into its search engine, yielding improved responses for entity queries as well as for question answering. In a similar effort, Apple Inc. is building an inhouse knowledge graph to power Siri and its next generation of intelligent products and services. Despite being rich sources of factual knowledge, cross-domain knowledge graphs often lack a succinct textual description for many of the existing entities. Fig. 1 depicts an example of a concise entity description presented to a user. Descriptions of this sort can be beneficial both to humans and in downstream AI and natural language processing tasks, including question answering (e.g., Who Figure 1: A motivating example question that demonstrates the importance of short textual descriptions. is Roger Federer?), named entity disambiguation (e.g., Philadelphia as a city vs. the film or even the brand of cream cheese), and information retrieval, to name but a few. Additionally, descriptions of this sort can also be useful to determine the ontological type of an entity – another challenging task that often needs to be addressed in cross-domain knowledge graphs. Many knowledge graphs already provide ontological type information, and there has been substantial previous research on how to predict such types automatically for entities in knowledge graphs (Neelakantan and Chang, 2015; Miao et al., 2016; Kejriwal and Szekely, 2017), in semistructured resources such as Wikipedia (Ponzetto and Strube, 2007; de Melo and Weikum, 2010), or even in unstructured text (Snow et al., 2006; Bansal et al., 2014; Tandon et al., 2015). However, most such work has targeted a fixed inventory of types from a given target ontology, many 878 of which are more abstract in nature (e.g., human or artifact). In this work, we consider the task of generating more detailed open vocabulary descriptions (e.g., Swiss tennis player) that can readily be presented to end users, generated from facts in the knowledge graph. Apart from type descriptions, certain knowledge graphs, such as Freebase and DBpedia, also provide a paragraph-length textual abstract for every entity. 
In the latter case, these are sourced from Wikipedia. There has also been research on generating such abstracts automatically (Biran and McKeown, 2017). While abstracts of this sort provide considerably more detail than ontological types, they are not sufficiently concise to be grasped at a single glance, and thus the onus is put on the reader to comprehend and summarize them. Typically, a short description of an entity will hence need to be synthesized just by drawing on certain most relevant facts about it. While in many circumstances, humans tend to categorize entities at a level of abstraction commonly referred to as basic level categories (Rosch et al., 1976), in an information seeking setting, however, such as in Fig. 1, humans naturally expect more detail from their interlocutor. For example, occupation and nationality are often the two most relevant properties used in describing a person in Wikidata, while terms such as person or human being are likely to be perceived as overly unspecific. However, choosing such most relevant and distinctive attributes from the set of available facts about the entity is non-trivial, especially given the diversity of different kinds of entities in broad-coverage knowledge graphs. Moreover, the generated text should be coherent, succinct, and non-redundant. To address this problem, we propose a dynamic memory-based generative network that can generate short textual descriptions from the available factual information about the entities. To the best of our knowledge, we are the first to present neural methods to tackle this problem. Previous work has suggested generating short descriptions using predefined templates (cf. Section 4). However, this approach severely restricts the expressivity of the model and hence such templates are typically only applied to very narrow classes of entities. In contrast, our goal is to design a broad-coverage open domain description generation architecture. In our experiments, we induce a new benchmark dataset for this task by relying on Wikidata, which has recently emerged as the most popular crowdsourced knowledge base, following Google’s designation of Wikidata as the successor to Freebase (Tanon et al., 2016). With a broad base of 19,000 casual Web users as contributors, Wikidata is a crucial source of machine-readable knowledge in many applications. Unlike DBpedia and Freebase, Wikidata usually contains a very concise description for many of its entities. However, because Wikidata is based on user contributions, many new entries are created that still lack such descriptions. This can be a problem for downstream tools and applications using Wikidata for background knowledge. Hence, even for Wikidata, there is a need for tools to generate fine-grained type descriptions. Fortunately, we can rely on the entities for which users have already contributed short descriptions to induce a new benchmark dataset for the task of automatically inducing type descriptions from structured data. 2 A Dynamic Memory-based Generative Network Architecture Our proposed dynamic memory-based generative network consists of three key components: an input module, a dynamic memory module, and an output module. A schematic diagram of these are given in Fig. 2. 2.1 Input Module The input to the input module is a set of N facts F = {f1, f2, . . . , fN} pertaining to an entity. Each of these input facts are essentially (s, p, o) triples, for subjects s, predicates p, and objects o. 
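For illustration, the minimal sketch below shows how an entity's statements could be flattened into the factual phrases that serve as the model's input; the conversion itself (concatenating the property name with its value) is described in Section 3.1, while the example facts and helper function here are merely illustrative:

```python
# Statements about a single entity, given as (subject, predicate, object) triples.
facts = [
    ("Roger Federer", "occupation", "tennis player"),
    ("Roger Federer", "country of citizenship", "Switzerland"),
]

def to_factual_phrase(fact):
    _subject, prop, value = fact
    # The subject is implicit, since all facts describe the same entity,
    # so the phrase concatenates only the property name and its value.
    return f"{prop} {value}"

phrases = [to_factual_phrase(f) for f in facts]
# -> ["occupation tennis player", "country of citizenship Switzerland"]
```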
Upon being encoded into a distributed vector representation, we refer to them as fact embeddings. Although many different encoding schemes can be adopted to obtain such fact embeddings, we opt for a positional encoding as described by Sukhbaatar et al. (2015), motivated in part by the considerations given by Xiong et al. (2016). For completeness, we describe the positional encoding scheme here. We encode each fact $f_i$ as a vector $f_i = \sum_{j=1}^{J} l_j \circ w^i_j$, where $\circ$ is an element-wise multiplication and $l_j$ is a column vector with the structure $l_{kj} = (1 - \frac{j}{J}) - \frac{k}{d}\left(1 - \frac{2j}{J}\right)$, with $J$ being the number of words in the factual phrase, $w^i_j$ the embedding of the $j$-th word, and $d$ the dimensionality of the embedding. Details about how these factual phrases are formed for our data are given in Section 3.3. Thus, the output of this module is a concatenation of $N$ fact embeddings $F = [f_1; f_2; \ldots; f_N]$.

Figure 2: Model architecture.

2.2 Dynamic Memory Module

The dynamic memory module is responsible for memorizing specific facts about an entity that will be useful for generating the next word in the output description sequence. Intuitively, such a memory should be able to update itself dynamically by accounting not only for the factual embeddings but also for the current context of the generated sequence of words. To begin with, the memory is initialized as $m^{(0)} = \max(0, W_m F + b_m)$. At each time step $t$, the memory module attempts to gather pertinent contextual information by attending to and summing over the fact embeddings in a weighted manner. These attention weights are scalar values informed by two factors: (1) how much information from a particular fact is used by the previous memory state $m^{(t-1)}$, and (2) how much information of a particular fact is invoked in the current context of the output sequence $h^{(t-1)}$. Formally,

$x^{(t)}_i = [\,|f_i - h^{(t-1)}|;\ |f_i - m^{(t-1)}|\,]$,  (1)
$z^{(t)}_i = W_2 \tanh(W_1 x^{(t)}_i + b_1) + b_2$,  (2)
$a^{(t)}_i = \frac{\exp(z^{(t)}_i)}{\sum_{k=1}^{N} \exp(z^{(t)}_k)}$,  (3)

where $|\cdot|$ is the element-wise absolute difference and $[\,;\,]$ denotes the concatenation of vectors. Having obtained the attention weights, we apply a soft attention mechanism to extract the current context vector at time $t$ as

$c^{(t)} = \sum_{i=1}^{N} a^{(t)}_i f_i$.  (4)

This newly obtained context information is then used along with the previous memory state to update the memory state as follows:

$C^{(t)} = [m^{(t-1)}; c^{(t)}; h^{(t-1)}]$  (5)
$m^{(t)} = \max(0, W_m C^{(t)} + b_m)$  (6)

Such updated memory states serve as the input to the decoder sequence of the output module at each time step.

2.3 Output Module

The output module governs the process of repeatedly decoding the current memory state so as to emit the next word in an ordered sequence of output words. We rely on GRUs for this. At each time step, the decoder GRU is presented as input a glimpse of the current memory state $m^{(t)}$ as well as the previous context of the output sequence, i.e., the previous hidden state of the decoder $h^{(t-1)}$. At each step, the resulting output of the GRU is concatenated with the context vector $c^{(t)}$ and is passed through a fully connected layer and finally through a softmax layer. During training, we deploy teacher forcing at each step by providing the vector embedding of the previous correct word in the sequence as an additional input. During testing, when such a signal is not available, we use the embedding of the predicted word in the previous step as an additional input to the current step.
Formally,

$h^{(t)} = \mathrm{GRU}([m^{(t)}; w^{(t-1)}], h^{(t-1)})$,  (7)
$\tilde{h}^{(t)} = \tanh(W_d [h^{(t)}; c^{(t)}] + b_d)$,  (8)
$\hat{y}^{(t)} = \mathrm{Softmax}(W_o \tilde{h}^{(t)} + b_o)$,  (9)

where $[\,;\,]$ is the concatenation operator, $w^{(t-1)}$ is the vector embedding of the previous word in the sequence, and $\hat{y}^{(t)}$ is the probability distribution for the predicted word over the vocabulary at the current step.

2.4 Loss Function and Training

Training this model amounts to picking suitable values for the model parameters $\theta$, which include the matrices $W_1$, $W_2$, $W_m$, $W_d$, $W_o$ and the corresponding bias terms $b_1$, $b_2$, $b_m$, $b_d$, and $b_o$, as well as the various transition and output matrices of the GRU. To this end, if each of the training instances has a description with a maximum of $M$ words, we can rely on the categorical cross-entropy over the entire output sequence as the loss function:

$L(\theta) = -\sum_{t=1}^{M} \sum_{j=1}^{|V|} y^{(t)}_j \log(\hat{y}^{(t)}_j)$,  (10)

where $y^{(t)}_j \in \{0, 1\}$ and $|V|$ is the vocabulary size. We train our model end-to-end using Adam as the optimization technique.

3 Evaluation

In this section, we describe the process of creating our benchmark dataset as well as the baseline methods and the experimental results.

3.1 Benchmark Dataset Creation

For the evaluation of our method, we introduce a novel benchmark dataset that we have extracted from Wikidata and transformed to a suitable format. We rely on the official RDF exports of Wikidata, which are generated regularly (Erxleben et al., 2014), specifically, the RDF dump dated 2016-08-01, which consists of 19,768,780 entities with 2,570 distinct properties. A pair of a property and its corresponding value represents a fact about an entity. In Wikidata parlance, such facts are called statements. We sample a dataset of 10K entities from Wikidata, and henceforth refer to the resulting dataset as WikiFacts10K. Our sampling method ensures that each entity in WikiFacts10K has an English description and at least 5 associated statements. We then transform each extracted statement into a phrasal form by concatenating the words of the property name and its value. For example, the (subject, predicate, object) triple (Roger Federer, occupation, tennis player) is transformed to 'occupation tennis player'. We refer to these phrases as the factual phrases, which are embedded as described earlier. We randomly divide this dataset into training, validation, and test sets with an 8:1:1 ratio. We have made our code and data available1 for reproducibility and to facilitate further research in this area.

3.2 Baselines

We compare our model against an array of baselines of varying complexity. We experiment with some variants of our model as well as several other state-of-the-art models that, although not specifically designed for this setting, can straightforwardly be applied to the task of generating descriptions from factual data.

1. Facts-to-sequence Encoder-Decoder Model. This model is a variant of the standard sequence-to-sequence encoder-decoder architecture described by Sutskever et al. (2014). However, instead of an input sequence, it here operates on a set of fact embeddings $\{f_1, f_2, \ldots, f_N\}$, which are emitted by the positional encoder described in Section 2.1. We initialize the hidden state of the decoder with a linear transformation of the fact embeddings as $h^{(0)} = WF + b$, where $F = [f_1; f_2; \ldots; f_N]$ is the concatenation of $N$ fact embeddings.
As an alternative, we also experimented with a sequence encoder that takes a separate fact embedding as input at each step and initializes the decoder hidden state with the final hidden state of the encoder. However, this approach did not yield us better results. 1https://github.com/kingsaint/Open-vocabulary-entitytype-description 881 Table 1: Automatic evaluation results of different models. For a detailed explanation of the baseline models, please refer to Section 3.2. The best performing model for each column is highlighted in boldface. Model B-1 B-2 B-3 B-4 ROUGE-L METEOR CIDEr Facts-to-seq 0.404 0.324 0.274 0.242 0.433 0.214 1.627 Facts-to-seq w. Attention 0.491 0.414 0.366 0.335 0.512 0.257 2.207 Static Memory 0.374 0.298 0.255 0.223 0.383 0.185 1.328 DMN+ 0.281 0.234 0.236 0.234 0.275 0.139 0.912 Our Model 0.611 0.535 0.485 0.461 0.641 0.353 3.295 2. Facts-to-sequence Model with Attention Decoder. The encoder of this model is identical to the one described above. The difference is in the decoder module that uses an attention mechanism. At each time step t, the decoder GRU receives a context vector c(t) as input, which is an attention weighted sum of the fact embeddings. The attention weights and the context vectors are computed as follows: x(t) = [w(t−1); h(t−1)] (11) z(t) = Wx(t) + b (12) a(t) = softmax(z(t)) (13) c(t) = max(0, N X i=1 a(t) i fi) (14) After obtaining the context vector, it is fed to the GRU as input: h(t) = GRU([w(t−1); c(t)], h(t−1)) (15) 3. Static Memory Model. This is a variant of our model in which we do not upgrade the memory dynamically at each time step. Rather, we use the initial memory state as the input to all of the decoder GRU steps. 4. Dynamic Memory Network (DMN+). We consider the approach proposed by Xiong et al. (2016), which supersedes Kumar et al. (2016). However, some minor modifications are needed to adapt it to our task. Unlike the bAbI dataset, our task does not involve any question. The presence of a question is imperative in DMN+, as it helps to determine the initial state of the episodic memory module. Thus, we prepend an interrogative phrase such as ”Who is” or ”What is” to every entity name. The question module of the DMN+ is hence presented with a question such as ”Who is Roger Federer?” or ”What is Star Wars?”. Another difference is in the output module. In DMN+, the final memory state is passed through a softmax layer to generate the answer. Since most answers in the bAbI dataset are unigrams, such an approach suffices. However, as our task is to generate a sequence of words as descriptions, we use a GRU-based decoder sequence model, which at each time step receives the final memory state m(T) as input to the GRU. We restrict the number of memory update episodes to 3, which is also the preferred number of episodes in the original paper. 3.3 Experimental Setup For each entity in the WikiFacts10K dataset, there is a corresponding set of facts expressed as factual phrases as defined earlier. Each factual phrase in turn is encoded as a vector by means of the positional encoding scheme described in Section 2.1. Although other variants could be considered, such as LSTMs and GRUs, we apply this standard fact encoding mechanism for our model as well as all our baselines for the sake of uniformity and fair comparison. Another factor that makes the use of a sequence encoder such as LSTMs or GRUs less suitable is that the set of input facts is essentially unordered without any temporal correlation between facts. 
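To make this shared fact-encoding step concrete, the following minimal sketch implements the positional encoding of Section 2.1 for a single factual phrase; only the weighting formula is prescribed by the model, whereas the NumPy implementation and function name are incidental choices:

```python
import numpy as np

def encode_fact(word_embeddings):
    """Positional encoding of one factual phrase (cf. Section 2.1).

    word_embeddings: array of shape (J, d), one row per word of the phrase.
    Returns the fact embedding f_i of shape (d,).
    """
    J, d = word_embeddings.shape
    j = np.arange(1, J + 1).reshape(J, 1)   # word positions 1..J
    k = np.arange(1, d + 1).reshape(1, d)   # embedding dimensions 1..d
    # l_{kj} = (1 - j/J) - (k/d) * (1 - 2j/J)
    l = (1.0 - j / J) - (k / d) * (1.0 - 2.0 * j / J)
    return (l * word_embeddings).sum(axis=0)  # element-wise weighting, summed over words

# e.g., a 3-word phrase such as "occupation tennis player" with d = 100
f_i = encode_fact(np.random.randn(3, 100))    # -> vector of shape (100,)
```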
We fixed the dimensionality of the fact embeddings and all hidden states to be 100. The vocabulary size is 29K. Our models and all other baselines are trained for a maximum of 25 epochs with an early stopping criterion and a fixed learning rate of 0.001. To evaluate the quality of the generated descriptions, we rely on the standard BLEU (B-1, B-2, B-3, B-4), ROUGE-L, METEOR and CIDEr metrics, as implemented by Sharma et al. (2017). Of course, we would be remiss not to point out that these metrics are imperfect. In general, they tend 882 to be conservative in that they only reward generated descriptions that overlap substantially with the ground truth descriptions given in Wikidata. In reality, it may of course be the case that alternative descriptions are equally appropriate. In fact, inspecting the generated descriptions, we found that our method often indeed generates correct alternative descriptions. For instance, Darius Kaiser is described as a cyclist, but one could also describe him as a German bicycle racer. Despite their shortcomings, the aforementioned metrics have generally been found suitable for comparing supervised systems, in that systems with significantly higher scores tend to fare better at learning to reproduce ground truth captions. 3.4 Results The results of the experiments are reported in Table 1. Across all metrics, we observe that our model obtains significantly better scores than the alternatives. A facts-to-seq model exploiting our positional fact encoding performs adequately. With an additional attention mechanism (Facts-to-seq w. Attention), the results are even better. This is on account of the attention mechanism’s ability to reconsider the attention distribution at each time step using the current context of the output sequence. The results suggest that this enables the model to more flexibly focus on the most pertinent parts of the input. In this regard, such a model thus resembles our approach. However, there are important differences between this baseline and our model. Our model not only uses the current context of the output sequence, but also memorizes how information of a particular fact has been used thus far, via the dynamic memory module. We conjecture that the dynamic memory module thereby facilitates generating longer description sequences more accurately by better tracking which parts have been attended to, as is empirically corroborated by the comparably higher BLEU scores for longer n-grams. The analysis of the Static Memory approach amounts to an ablation study, as it only differs from our full model in lacking memory updates. The divergence of scores between the two variants suggests that the dynamic memory indeed is vital for more dynamically attending to the facts by taking into account the current context of the output sequence at each step. Our model needs to dynamically achieve different objectives at different time points. For instance, it may start off looking at several properties to infer a type of the appropriate granularity for the entity (e.g., village), while in the following steps it considers a salient property and emits the corresponding named entity for it as well as a suitable preposition (e.g., in China). Finally, the poor results of the DMN+ approach show that a na¨ıve application of a state-of-theart dynamic memory architecture does not suffice to obtain strong results on this task. Indeed, the DMN+ is even outperformed by our Facts-to-seq baseline. 
This appears to stem from the inability of the model to properly memorize all pertinent facts in its encoder. Analysis. In Figure 3, we visualize the attention distribution over facts. We observe how the model shifts its focus to different sorts of properties while generating successive words. Table 2 provides a representative sample of the generated descriptions and their ground truth counterparts. A manual inspection reveals five distinct patterns. The first case is that of exact matches with the reference descriptions. The second involves examples on which there is a high overlap of words between the ground truth and generated descriptions, but the latter as a whole is incorrect because of semantic drift or other challenges. In some cases, the model may have never seen a word or named entity during training (e.g., Hypocrisy), or their frequency is very limited in the training set. While it has been shown that GRUs with an attention mechanism are capable of learning to copy random strings from the input (Gu et al., 2016), we conjecture that a dedicated copy mechanism may help to mitigate this problem, which we will explore in future research. In other cases, the model conflates semantically related concepts, as is evident from examples such as a film being described as a filmmaker and a polo player as a water polo player. Next, the third group involves generated descriptions that are more specific than the ground truth, but correct, while, in the fourth group, the generated outputs generalize the descriptions to a certain extent. For example, American musician and pianist is generalized as American musician, since musician is a hypernym of pianist. Finally, the last group consists of cases in which our model generated descriptions that are factually accurate and may be deemed appropriate despite diverging from the 883 Figure 3: An example of attention distribution over the facts while emitting words. The country of citizenship property gets the most attention while generating the first word French of the left description. For generating the next three words, the fact occupation attracts the most attention. Similarly, instance of attracts the most attention when generating the sequence Italian comune. Table 2: A representative sample of the generated descriptions and its comparison with the ground truth descriptions. 
Item Ground Truth Description Generated Description Matches Q20538915 painting by Claude Monet painting by Claude Monet Q10592904 genus of fungi genus of fungi Q669081 municipality in Austria municipality in Austria Q23588047 microbial protein found in microbial protein found in Mycobacterium abscessus Mycobacterium abscessus Semantic drift Q1777131 album by Hypocrisy album by Mandy Moore Q16164685 polo player water polo player Q849834 class of 46 electric locomotives class of 20 british 0-6-0t locomotives Q1434610 1928 film filmmaker More specific Q1865706 footballer Finnish footballer Q19261036 number natural number Q7807066 cricketer English cricketer Q10311160 Brazilian lawyer Brazilian lawyer and politician More general Q149658 main-belt asteroid asteroid Q448330 American musician and pianist American musician Q4801958 2011 Hindi film Indian film Q7815530 South Carolina politician American politician Alternative Q7364988 Dean of York British academic Q1165984 cyclist German bicycle racer Q6179770 recipient of the knight’s cross German general Q17660616 singer-songwriter Canadian musician reference descriptions to an extent that almost no overlapping words are shared with them. Note that such outputs are heavily penalized by the metrics considered in our evaluation. 4 Related Work Type Prediction. There has been extensive work on predicting the ontological types of entities in large knowledge graphs (Neelakantan and Chang, 2015; Miao et al., 2016; Kejriwal and Szekely, 2017; Shimaoka et al., 2017), in semistructured resources such as Wikipedia (Ponzetto and Strube, 2007; de Melo and Weikum, 2010), as well as in text (Del Corro et al., 2015; Yaghoobzadeh and Sch¨utze, 2015; Ren et al., 2016). However, the major shortcoming of these sorts of methods, including those aiming at more fine-grained typing, is that they assume that the set of candidate types is given as input, and the main remaining challenge is to pick the correct one(s). In contrast, our work yields descriptions that often indicate the type of entity, but typically are more natural-sounding and descriptive (e.g. French Impressionist artist) than the oftentimes abstract ontological types (such as human or artifact) chosen by type prediction methods. A separate, long-running series of work has obtained open vocabulary type predictions for named entities and concepts mentioned in text (Hearst, 1992; Snow et al., 2006), possibly also induc884 ing taxonomies from them (Poon and Domingos, 2010; Velardi et al., 2013; Bansal et al., 2014). However, these methods typically just need to select existing spans of text from the input as the output description. Text Generation from Structured Data. Research on methods to generate descriptions for entities has remained scant. Lebret et al. (2016) take Wikipedia infobox data as input and train a custom form of neural language model that, conditioned on occurrences of words in the input table, generates biographical sentences as output. However, their system is limited to a single kind of description (biographical sentences) that tend to share a common structure. Wang et al. (2016) focus on the problem of temporal ordering of extracted facts. Biran and McKeown (2017) introduced a template-based description generation framework for creating hybrid concept-to-text and text-to-text generation systems that produce descriptions of RDF entities. Their framework can be tuned for new domains, but does not yield a broad-coverage multi-domain model. Voskarides et al. 
(2017) first create sentence templates for specific entity relationships, and then, given a new relationship instance, generate a description by selecting the best template and filling the template slots with the appropriate entities from the knowledge graph. Kutlak et al. (2013) generates referring expressions by converting property-value pairs to text using a hand-crafted mapping scheme. Wiseman et al. (2017) considered the related task of mapping tables with numeric basketball statistics to natural language. They investigated an extensive array of current state-of-the-art neural pointer methods but found that template-based models outperform all neural models on this task by a significant margin. However, their method requires specific templates for each domain (for example, basketball games in their case). Applying template-based methods to cross-domain knowledge bases is highly challenging, as this would require too many different templates for different types of entities. Our dataset contains items of from a large number of diverse domains such as humans, books, films, paintings, music albums, genes, proteins, cities, scientific articles, etc., to name but a few. Chen and Mooney (2008) studied the task of taking representations of observations from a sports simulation (Robocup simulator) as input, e.g. pass(arg1=purple6, arg2=purple3), and generating game commentary. Liang et al. (2009) learned alignments between formal descriptions such as rainChance(time=26-30,mode=Def) and natural language weather reports. Mei et al. (2016) used LSTMs for these sorts of generation tasks, via a custom coarse-to-fine architecture that first determines which input parts to focus on. Much of the aforementioned work essentially involves aligning small snippets in the input to the relevant parts in the training output and then learning to expand such input snippets into full sentences. In contrast, in our task, alignments between parts of the input and the output do not suffice. Instead, describing an entity often also involves considering all available evidence about that entity to infer information about it that is often not immediately given. Rather than verbalizing facts, our method needs a complex attention mechanism to predict an object’s general type and consider the information that is most likely to appear salient to humans from across the entire input. The WebNLG Challenge (Gardent et al., 2017) is another task for generating text from structured data. However, this task requires a textual verbalization of every triple. On the contrary, the task we consider in this work is quite complementary in that a verbalization of all facts one-by-one is not the sought result. Rather, our task requires synthesizing a short description by carefully selecting the most relevant and distinctive facts from the set of all available facts about the entity. Due to these differences, the WebNLG dataset was not suitable for the research question considered by our paper. Neural Text Summarization. Generating entity descriptions is related to the task of text summarization. Most traditional work in this area was extractive in nature, i.e. it selects the most salient sentences from a given input text and concatenates them to form a shorter summary or presents them differently to the user (Yang et al., 2017). Abstractive summarization goes beyond this in generating new text not necessarily encountered in the input, as is typically necessary in our setting. 
The surge of sequence-to-sequence modeling of text via LSTMs naturally extends to the task of abstractive summarization by training a model to accept a longer sequence as input and learning to generate a shorter compressed sequence as a summary. Rush et al. (2015) employed this idea to generate a short headline from the first sentence of a text. Subsequent work investigated the use of 885 architectures such as pointer-generator networks to better cope with long input texts (See et al., 2017). Recently, Liu et al. (2018) presented a model that generates an entire Wikipedia article via a neural decoder component that performs abstractive summarization of multiple source documents. Our work differs from such previous work in that we do not consider a text sequence as input. Rather, our input are a series of entity relationships or properties, as reflected by our facts-to-sequence baselines in the experiments. Note that our task is in certain respects also more difficult than text summarization. While regular neural summarizers are often able to identify salient spans of text that can be copied to the output, our input is of a substantially different form than the desired output. Additionally, our goal is to make our method applicable to any entity with factual information that may not have a corresponding Wikipedia-like article available. Indeed, Wikidata currently has 46 million items, whereas the English Wikipedia has only 5.6 million articles. Hence, for the vast majority of items in Wikidata, no corresponding Wikipedia article is available. In such cases, a summarization baseline will not be effective. Episodic Memory Architectures. A number of neural models have been put forth that possess the ability to interact with a memory component. Recent advances in neural architectures that combine memory components with an attention mechanism exhibit the ability to extract and reason over factual information. A well-known example is the End-To-End Memory Network model by Sukhbaatar et al. (2015), which may make multiple passes over the memory input to facilitate multi-hop reasoning. These have been particularly successful on the bAbI test suite of artificial comprehension tests (Weston et al., 2015), due to their ability to extract and reason over the input. At the core of the Dynamic Memory Networks (DMN) architecture (Kumar et al., 2016) is an episodic memory module, which is updated at each episode with new information that is required to answer a predefined question. Our approach shares several commonalities with DMNs, as it is also endowed with a dynamic memory of this sort. However, there are also a number of significant differences. First of all, DMN and its improved version DMN+ (Xiong et al., 2016) assume sequential correlations between the sentences and rely on them for reasoning purposes. To this end, DMN+ needs an additional layer of GRUs, which is used to capture sequential correlations among sentences. Our model does not need any such layer, as facts in a knowledge graph do not necessarily possess any sequential interconnections. Additionally, DMNs assume a predefined number of memory episodes, with the final memory state being passed to the answer module. Unlike DMNs, our model uses the dynamic context of the output sequence to update the memory state. The number of memory updates in our model flexibly depends on the length of the generated sequence. 
DMNs also have an additional question module as input, which guides the memory updates and also the output, while our model does not leverage any such guiding factor. Finally, in DMNs, the output is typically a unigram, whereas our model emits a sequence of words. 5 Conclusion Short textual descriptions of entities facilitate instantaneous grasping of key information about entities and their types. Generating them from facts in a knowledge graph requires not only mapping the structured fact information to natural language, but also identifying the type of entity and then discerning the most crucial pieces of information for that particular type from the long list of input facts and compressing them down to a highly succinct form. This is very challenging in light of the very heterogeneous kinds of entities in our data. To this end, we have introduced a novel dynamic memory-based neural architecture that updates its memory at each step to continually reassess the relevance of potential input signals. We have shown that our approach outperforms several competitive baselines. In future work, we hope to explore the potential of this architecture on further kinds of data, including multimodal data (Long et al., 2018), from which one can extract structured signals. Our code and data is freely available.2 Acknowledgments This research is funded in part by ARO grant no. W911NF-17-C-0098 as part of the DARPA SocialSim program. 2https://github.com/kingsaint/ Open-vocabulary-entity-type-description 886 References Mohit Bansal, David Burkett, Gerard de Melo, and Dan Klein. 2014. Structured learning for taxonomy induction with belief propagation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 1041–1051. http://www.aclweb.org/anthology/P14-1098. Or Biran and Kathleen McKeown. 2017. Domainadaptable hybrid generation of RDF entity descriptions. In Proceedings of the Eighth International Joint Conference on Natural Language Processing, IJCNLP 2017, Taipei, Taiwan, November 27 December 1, 2017 - Volume 1: Long Papers. pages 306–315. https://aclanthology.info/papers/I171031/i17-1031. David L. Chen and Raymond J. Mooney. 2008. Learning to sportscast: A test of grounded language acquisition. In Proceedings of the 25th International Conference on Machine Learning. ACM, New York, NY, USA, ICML ’08, pages 128–135. https://doi.org/10.1145/1390156.1390173. Gerard de Melo and Gerhard Weikum. 2010. MENTA: Inducing multilingual taxonomies from Wikipedia. In Jimmy Huang, Nick Koudas, Gareth Jones, Xindong Wu, Kevyn Collins-Thompson, and Aijun An, editors, Proceedings of the 19th ACM Conference on Information and Knowledge Management (CIKM 2010). ACM, New York, NY, USA, pages 1099– 1108. Luciano Del Corro, Abdalghani Abujabal, Rainer Gemulla, and Gerhard Weikum. 2015. FINET: Context-aware fine-grained named entity typing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 868–878. http://aclweb.org/anthology/D15-1103. Fredo Erxleben, Michael G¨unther, Markus Kr¨otzsch, Julian Mendez, and Denny Vrandeˇci´c. 2014. Introducing Wikidata to the Linked Data Web. In Peter Mika, Tania Tudorache, Abraham Bernstein, Chris Welty, Craig A. Knoblock, Denny Vrandeˇci´c, Paul T. Groth, Natasha F. Noy, Krzysztof Janowicz, and Carole A. 
Goble, editors, Proceedings of the 13th International Semantic Web Conference (ISWC’14). Springer, volume 8796 of LNCS, pages 50–65. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg challenge: Generating text from rdf data. In Proceedings of the 10th International Conference on Natural Language Generation. Association for Computational Linguistics, Santiago de Compostela, Spain, pages 124–133. http://www.aclweb.org/anthology/W17-3518. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1631–1640. http://www.aclweb.org/anthology/P16-1154. Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In COLING. Mayank Kejriwal and Pedro Szekely. 2017. Supervised typing of big graphs using semantic embeddings. CoRR abs/1703.07805. http://arxiv.org/abs/1703.07805. Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In Maria Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning. PMLR, New York, New York, USA, volume 48 of Proceedings of Machine Learning Research, pages 1378–1387. http://proceedings.mlr.press/v48/kumar16.html. Roman Kutlak, Kees van Deemter, and Christopher Stuart Mellish. 2013. Generation of referring expressions in large domains. R´emi Lebret, David Grangier, and Michael Auli. 2016. Generating text from structured data with application to the biography domain. CoRR abs/1603.07771. http://arxiv.org/abs/1603.07771. Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1 Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL ’09, pages 91–99. http://dl.acm.org/citation.cfm?id=1687878.1687893. Peter Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating Wikipedia by summarizing long sequences. CoRR abs/1801.10198. http://arxiv.org/abs/1801.10198. Xiang Long, Chuang Gan, and Gerard de Melo. 2018. Video captioning with multi-faceted attention. Transactions of the Association for Computational Linguistics (TACL) 6:173–184. https://transacl.org/ojs/index.php/tacl/article/view/1289. Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. What to talk about and how? Selective generation using LSTMs with coarse-to-fine alignment. In Proceedings of NAACL. 887 Qingliang Miao, Ruiyu Fang, Shuangyong Song, Zhongguang Zheng, Lu Fang, Yao Meng, and Jun Sun. 2016. Automatic identifying entity type in Linked Data. In Proceedings of the 30th Pacific Asia Conference on Language, Information and Computation, PACLIC 30, Seoul, Korea, October 28 - October 30, 2016. http://aclweb.org/anthology/Y/Y16/Y16-3009.pdf. Arvind Neelakantan and Ming-Wei Chang. 2015. Inferring missing entity type instances for knowledge base completion: New dataset and methods. 
In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 515–525. http://www.aclweb.org/anthology/N15-1054. Simone Paolo Ponzetto and Michael Strube. 2007. Deriving a large scale taxonomy from Wikipedia. In Proceedings of the 22Nd National Conference on Artificial Intelligence Volume 2. AAAI Press, AAAI’07, pages 1440–1445. http://dl.acm.org/citation.cfm?id=1619797.1619876. Hoifung Poon and Pedro Domingos. 2010. Unsupervised ontology induction from text. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL ’10, pages 296–305. http://dl.acm.org/citation.cfm?id=1858681.1858712. Xiang Ren, Wenqi He, Meng Qu, Lifu Huang, Heng Ji, and Jiawei Han. 2016. AFET: Automatic finegrained entity typing by hierarchical partial-label embedding. In EMNLP. Eleanor Rosch, Carolyn B. Mervis, Wayne D. Gray, David M. Johnson, and Penny Boyes-Braem. 1976. Basic objects in natural categories. Cognitive Psychology . Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685 . Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers. pages 1073– 1083. https://doi.org/10.18653/v1/P17-1099. Shikhar Sharma, Layla El Asri, Hannes Schulz, and Jeremie Zumer. 2017. Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation. CoRR abs/1706.09799. http://arxiv.org/abs/1706.09799. Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2017. Neural architectures for fine-grained entity type classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics, Valencia, Spain, pages 1271– 1280. http://www.aclweb.org/anthology/E17-1119. Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL-44, pages 801–808. https://doi.org/10.3115/1220175.1220276. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, Curran Associates, Inc., pages 2440– 2448. http://papers.nips.cc/paper/5846-end-to-endmemory-networks.pdf. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, Curran Associates, Inc., pages 3104– 3112. http://papers.nips.cc/paper/5346-sequenceto-sequence-learning-with-neural-networks.pdf. Niket Tandon, Gerard de Melo, Abir De, and Gerhard Weikum. 2015. Knowlywood: Mining activity knowledge from Hollywood narratives. 
In Proceedings of CIKM 2015. Thomas Pellissier Tanon, Denny Vrandeˇci´c, Sebastian Schaffert, Thomas Steiner, and Lydia Pintscher. 2016. From Freebase to Wikidata: The great migration. In World Wide Web Conference. Paola Velardi, Stefano Faralli, and Roberto Navigli. 2013. OntoLearn reloaded: A graphbased algorithm for taxonomy induction. Computational Linguistics 39(3):665–707. https://doi.org/10.1162/COLI a 00146. Nikos Voskarides, Edgar Meij, and Maarten de Rijke. 2017. Generating descriptions of entity relationships. In ECIR 2017: 39th European Conference on Information Retrieval. Springer, LNCS. Yafang Wang, Zhaochun Ren, Martin Theobald, Maximilian Dylla, and Gerard de Melo. 2016. Summary generation for temporal extractions. In Proceedings of 27th International Conference on Database and Expert Systems Applications (DEXA 2016). Chris Welty, J. William Murdock, Aditya Kalyanpur, and James Fan. 2012. A comparison of hard filters and soft evidence for answer typing in Watson. In Philippe Cudr´e-Mauroux, Jeff Heflin, 888 Evren Sirin, Tania Tudorache, J´erˆome Euzenat, Manfred Hauswirth, Josiane Xavier Parreira, Jim Hendler, Guus Schreiber, Abraham Bernstein, and Eva Blomqvist, editors, The Semantic Web – ISWC 2012. Springer Berlin Heidelberg, Berlin, Heidelberg, pages 243–256. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards aicomplete question answering: A set of prerequisite toy tasks. CoRR abs/1502.05698. http://arxiv.org/abs/1502.05698. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 2253–2263. https://www.aclweb.org/anthology/D17-1239. Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. In Proceedings of the 33rd International Conference on International Conference on Machine Learning Volume 48. JMLR.org, ICML’16, pages 2397–2406. http://dl.acm.org/citation.cfm?id=3045390.3045643. Yadollah Yaghoobzadeh and Hinrich Sch¨utze. 2015. Corpus-level fine-grained entity typing using contextual information. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 715–725. http://aclweb.org/anthology/D15-1083. Qian Yang, Yong Cheng, Sen Wang, and Gerard de Melo. 2017. HiText: Text reading with dynamic salience marking. In Proceedings of WWW 2017 (Digital Learning Track). ACM.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 889–898 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 889 Hierarchical Neural Story Generation Angela Fan Mike Lewis Facebook AI Research, Menlo Park {angelafan, mikelewis, ynd}@fb.com Yann Dauphin Abstract We explore story generation: creative systems that can build coherent and fluent passages of text about a topic. We collect a large dataset of 300K human-written stories paired with writing prompts from an online forum. Our dataset enables hierarchical story generation, where the model first generates a premise, and then transforms it into a passage of text. We gain further improvements with a novel form of model fusion that improves the relevance of the story to the prompt, and adding a new gated multi-scale self-attention mechanism to model long-range context. Experiments show large improvements over strong baselines on both automated and human evaluations. Human judges prefer stories generated by our approach to those from a strong non-hierarchical model by a factor of two to one. 1 Introduction Story-telling is on the frontier of current text generation technology: stories must remain thematically consistent across the complete document, requiring modeling very long range dependencies; stories require creativity; and stories need a high level plot, necessitating planning ahead rather than word-by-word generation (Wiseman et al., 2017). We tackle the challenges of story-telling with a hierarchical model, which first generates a sentence called the prompt describing the topic for the story, and then conditions on this prompt when generating the story. Conditioning on the prompt or premise makes it easier to generate consistent stories because they provide grounding for the overall plot. It also reduces the tendency of standard sequence models to drift off topic. Prompt: The Mage, the Warrior, and the Priest Story: A light breeze swept the ground, and carried with it still the distant scents of dust and time-worn stone. The Warrior led the way, heaving her mass of armour and muscle over the uneven terrain. She soon crested the last of the low embankments, which still bore the unmistakable fingerprints of haste and fear. She lifted herself up onto the top the rise, and looked out at the scene before her. [...] Figure 1: Example prompt and beginning of a story from our dataset. We train a hierarchical model that first generates a prompt, and then conditions on the prompt when generating a story. We find that standard sequence-to-sequence (seq2seq) models (Sutskever et al., 2014) applied to hierarchical story generation are prone to degenerating into language models that pay little attention to the writing prompt (a problem that has been noted in other domains, such as dialogue response generation (Li et al., 2015a)). This failure is due to the complex and underspecified dependencies between the prompt and the story, which are much harder to model than the closer dependencies required for language modeling (for example, consider the subtle relationship between the first sentence and prompt in Figure 1). To improve the relevance of the generated story to its prompt, we introduce a fusion mechanism (Sriram et al., 2017) where our model is trained on top of an pre-trained seq2seq model. To improve over the pre-trained model, the second model must focus on the link between the prompt and the story. 
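As a preview of the fusion mechanism formalized in Section 3.4 (a learned gate over the concatenation of the pretrained and trainable decoders' hidden states, followed by GLU layers with layer normalization), a minimal sketch is given below. The layer sizes and the depth of the projection stack are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch of cold-fusion-style gating over the hidden states of a frozen pretrained
# decoder and a trainable decoder (cf. the g_t / h_t equations in Section 3.4).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 2 * dim)   # learns g_t from [h_train; h_pretrained]
        self.proj = nn.Linear(2 * dim, 2 * dim)   # GLU halves the width back to dim
        self.norm = nn.LayerNorm(dim)

    def forward(self, h_train, h_pretrained):
        h_pretrained = h_pretrained.detach()      # the pretrained model stays fixed
        joint = torch.cat([h_train, h_pretrained], dim=-1)
        gated = torch.sigmoid(self.gate(joint)) * joint
        return self.norm(F.glu(self.proj(gated), dim=-1))

fusion = FusionLayer(dim=256)
h_train, h_pre = torch.randn(4, 20, 256), torch.randn(4, 20, 256)
fused = fusion(h_train, h_pre)                    # (4, 20, 256), fed to the output layer
```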
For the first time, we show that fusion mechanisms can help seq2seq models build dependencies between their input and output. Another major challenge in story generation is the inefficiency of modeling long documents with standard recurrent architectures—stories contain 734 words on average in our dataset. We improve efficiency using a convolutional architecture, al890 # Train Stories 272,600 # Test Stories 15,138 # Validation Stories 15,620 # Prompt Words 7.7M # Story Words 200M Average Length of Prompts 28.4 Average Length of Stories 734.5 Table 1: Statistics of WRITINGPROMPTS dataset lowing whole stories to be encoded in parallel. Existing convolutional architectures only encode a bounded amount of context (Dauphin et al., 2017), so we introduce a novel gated self-attention mechanism that allows the model to condition on its previous outputs at different time-scales. To train our models, we gathered a large dataset of 303,358 human generated stories paired with writing prompts from an online forum. Evaluating free form text is challenging, so we also introduce new evaluation metrics which isolate different aspects of story generation. Experiments show that our fusion and selfattention mechanisms improve over existing techniques on both automated and human evaluation measures. Our new dataset and neural architectures allow for models which can creatively generate longer, more consistent and more fluent passages of text. Human judges prefer our hierarchical model’s stories twice as often as those of a nonhierarchical baseline. 2 Writing Prompts Dataset We collect a hierarchical story generation dataset1 from Reddit’s WRITINGPROMPTS forum.2 WRITINGPROMPTS is a community where online users inspire each other to write by submitting story premises, or prompts, and other users freely respond. Each prompt can have multiple story responses. The prompts have a large diversity of topic, length, and detail. The stories must be at least 30 words, avoid general profanity and inappropriate content, and should be inspired by the prompt (but do not necessarily have to fulfill every requirement). Figure 1 shows an example. We scraped three years of prompts and their associated stories using the official Reddit API. We clean the dataset by removing automated bot posts, deleted posts, special announcements, com1 www.github.com/pytorch/fairseq 2www.reddit.com/r/WritingPrompts/ ments from moderators, and stories shorter than 30 words. We use NLTK for tokenization. The dataset models full text to generate immediately human-readable stories. We reserve 5% of the prompts for a validation set and 5% for a test set, and present additional statistics about the dataset in Table 1. For our experiments, we limit the length of the stories to 1000 words maximum and limit the vocabulary size for the prompts and the stories to words appearing more than 10 times each. We model an unknown word token and an end of document token. This leads to a vocabulary size of 19,025 for the prompts and 104,960 for the stories. As the dataset is scraped from an online forum, the number of rare words and misspellings is quite large, so modeling the full vocabulary is challenging and computationally intensive. 3 Approach The challenges of WRITINGPROMPTS are primarily in modeling long-range dependencies and conditioning on an abstract, high-level prompt. Recurrent and convolutional networks have successfully modeled sentences (Jozefowicz et al., 2016; Dauphin et al., 2017), but accurately modeling several paragraphs is an open problem. 
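As a concrete aside on the data preparation described in Section 2 above (a word-frequency threshold of 10, a 1000-word story cap, and special unknown and end-of-document symbols), a minimal preprocessing sketch follows; the token strings and helper names are assumptions rather than the released preprocessing code.

```python
# Illustrative vocabulary construction for WRITINGPROMPTS-style data: keep words
# appearing more than 10 times and reserve <unk> / <eos> symbols (token names assumed).
from collections import Counter

def build_vocab(tokenized_texts, min_count=10, specials=("<unk>", "<eos>")):
    counts = Counter(tok for text in tokenized_texts for tok in text)
    vocab = list(specials) + sorted(w for w, c in counts.items() if c > min_count)
    return {w: i for i, w in enumerate(vocab)}

def encode(tokens, vocab, max_len=1000):
    unk = vocab["<unk>"]
    ids = [vocab.get(t, unk) for t in tokens[:max_len]]   # stories capped at 1000 words
    return ids + [vocab["<eos>"]]
```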
While seq2seq networks have strong performance on a variety of problems, we find that they are unable to build stories that accurately reflect the prompts. We will evaluate strategies to address these challenges in the following sections. 3.1 Hierarchical Story Generation High-level structure is integral to good stories, but language models generate on a strictly-word-byword basis and so cannot explicitly make highlevel plans. We introduce the ability to plan by decomposing the generation process into two levels. First, we generate the premise or prompt of the story using the convolutional language model from Dauphin et al. (2017). The prompt gives a sketch of the structure of the story. Second, we use a seq2seq model to generate a story that follows the premise. Conditioning on the prompt makes it easier for the story to remain consistent and also have structure at a level beyond single phrases. 891 Figure 2: Self-Attention Mechanism of a single head, with GLU gating and downsampling. Multiple heads are concatenated, with each head using a separate downsampling function. 3.2 Efficient Learning with Convolutional Sequence-to-Sequence Model The length of stories in our dataset is a challenge for RNNs, which process tokens sequentially. To transform prompts into stories, we instead build on the convolutional seq2seq model of Gehring et al. (2017), which uses deep convolutional networks as the encoder and decoder. Convolutional models are ideally suited to modeling long sequences, because they allow parallelism of computation within the sequence. In the Conv seq2seq model, the encoder and decoder are connected with attention modules (Bahdanau et al., 2015) that perform a weighted sum of encoder outputs, using attention at each layer of the decoder. 3.3 Modeling Unbounded Context with Gated Multi-Scale Self-attention CNNs can only model a bounded context window, preventing the modeling of long-range dependencies within the output story. To enable modeling of unbounded context, we supplement the decoder with a self-attention mechanism (Sukhbaatar et al., 2015; Vaswani et al., 2017), Figure 3: Multihead self-attention mechanism. The decoder layer depicted attends with itself to gate the input of the subsequent decoder layer. which allows the model to refer to any previously generated words. The self-attention mechanism improves the model’s ability to extract long-range context with limited computational impact due to parallelism. Gated Attention: Similar to Vaswani et al. (2017), we use multi-head attention to allow each head to attend to information at different positions. However, the queries, keys and values are not given by linear projections but by more expressive gated deep neural nets with Gated Linear Unit (Dauphin et al., 2017) activations. We show that gating lends the self-attention mechanism crucial capacity to make fine-grained selections. Multi-Scale Attention: Further, we propose to have each head operating at a different time scale, depicted in Figure 2. Thus the input to each head is downsampled a different amount—the first head sees the full input, the second every other input timestep, the third every third input timestep, etc. The different scales encourage the heads to attend to different information. The downsampling operation limits the number of tokens in the attention maps, making them sharper. 
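A minimal sketch of one way to realize such gated, multi-scale heads is shown below: each head downsamples its input with a different stride, computes GLU-gated query/key/value projections, and attends only to strictly earlier positions (positions with no valid past fall back to a zero output, standing in for the optional zero-vector attention). The exact downsampling networks and head dimensions are assumptions, not the paper's implementation.

```python
# Sketch of a gated multi-scale self-attention head: stride-s downsampling of the
# decoder states, GLU-gated q/k/v projections, and a causal (past-only) mask.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedMultiScaleHead(nn.Module):
    def __init__(self, dim, stride):
        super().__init__()
        self.stride = stride                  # head k uses stride k (multi-scale)
        self.q = nn.Linear(dim, 2 * dim)      # 2*dim so that GLU returns dim
        self.k = nn.Linear(dim, 2 * dim)
        self.v = nn.Linear(dim, 2 * dim)

    def forward(self, h):                     # h: (batch, time, dim)
        hs = h[:, ::self.stride]              # downsampled view of the sequence
        q = F.glu(self.q(h), dim=-1)          # queries at full resolution
        k = F.glu(self.k(hs), dim=-1)
        v = F.glu(self.v(hs), dim=-1)
        scores = torch.matmul(q, k.transpose(1, 2)) / k.size(-1) ** 0.5
        t = torch.arange(h.size(1), device=h.device)                   # query positions
        s = torch.arange(hs.size(1), device=h.device) * self.stride    # key positions
        causal = s.unsqueeze(0) < t.unsqueeze(1)     # attend strictly to the past
        scores = scores.masked_fill(~causal, float("-inf"))
        attn = torch.softmax(scores, dim=-1)
        attn = torch.nan_to_num(attn)          # rows with no valid past attend to nothing
        return torch.matmul(attn, v)

heads = [GatedMultiScaleHead(dim=64, stride=s) for s in (1, 2, 3)]
h = torch.randn(2, 30, 64)
out = torch.cat([head(h) for head in heads], dim=-1)   # (2, 30, 192)
```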
The output of a single attention head is given by hL+1 0:t = Linear  v(hL 0:t−1) (1) ⊙softmax(q(hL 0:t)k(hL 0:t)⊤)  where hL 0:t contains the hidden states up to time t 892 at layer L, and q, k, v are gated downsampling networks as shown in Figure 2. Unlike Vaswani et al. (2017), we allow the model to optionally attend to a 0 vector at each timestep, if it chooses to ignore the information of past timesteps (see Figure 3). This mechanism allows the model to recover the non-self-attention architecture and avoid attending to the past if it provides only noise. Additionally, we do not allow the self-attention mechanism to attend to the current timestep, only the past. 3.4 Improving Relevance to Input Prompt with Model Fusion Unlike tasks such as translation, where the semantics of the target are fully specified by the source, the generation of stories from prompts is far more open-ended. We find that seq2seq models ignore the prompt and focus solely on modeling the stories, because the local dependencies required for language modeling are easier to model than the subtle dependencies between prompt and story. We propose a fusion-based approach to encourage conditioning on the prompt. We train a seq2seq model that has access to the hidden states of a pretrained seq2seq model. Doing so can be seen as a type of boosting or residual learning that allows the second model to focus on what the first model failed to learn—such as conditioning on the prompt. To our knowledge, this paper is the first to show that fusion reduces the problem of seq2seq models degenerating into language models that capture primarily syntactic and grammatical information. The cold fusion mechanism of Sriram et al. (2017) pretrains a language model and subsequently trains a seq2seq model with a gating mechanism that learns to leverage the final hidden layer of the language model during seq2seq training. We modify this approach by combining two seq2seq models as follows (see Figure 4): gt = σ(W[hTraining t ; hPretrained t ] + b) ht = gt ◦[hTraining t ; hPretrained t ] where the hidden state of the pretrained seq2seq model and training seq2seq model (represented by ht) are concatenated to learn gates gt. The gates are computed using a linear projection with the weight matrix W. The gated hidden layers are combined by concatenation and followed by more fully connected layers with GLU activations (see Figure 4: Diagram of our fusion model, which learns a second seq2seq model to improve a pretrained model. The separate hidden states are combined after gating through concatenation. Appendix). We use layer normalization (Ba et al., 2016) after each fully connected layer. 4 Related Work 4.1 Story Generation Sequence-to-sequence neural networks (Sutskever et al., 2014) have achieved state of the art performance on a variety of text generation tasks, such as machine translation (Sutskever et al., 2014) and summarization (Rush et al., 2015). Recent work has applied these models to more open-ended generation tasks, including writing Wikipedia articles (Liu et al., 2018) and poetry (Zhang and Lapata, 2014). Previous work on story generation has explored seq2seq RNN architectures (Roemmele, 2016), but has focused largely on using various content to inspire the stories. For instance, Kiros et al. (2015) uses photos to inspire short paragraphs trained on romance novels, and Jain et al. (2017) chain a series of independent descriptions together into a short story. Martin et al. 
(2017) decompose story generation into two steps, first converting text into event representations, then modeling stories as sequences of events before translating back to natural language. Similarly, Harrison et al. (2017) generate summaries of movies as sequences of events using an RNN, then sample event representations using MCMC. They find this technique can generate text of the desired genre, but the movie plots 893 are not interpretable (as the model outputs events, not raw text). However, we are not aware of previous work that has used hierarchical generation from a textual premise to improve the coherence and structure of stories. 4.2 Hierarchical Text Generation Previous work has proposed decomposing the challenge of generating long sequences of text into a hierarchical generation task. For instance, Li et al. (2015b) use an LSTM to hierarchically learn word, then sentence, then paragraph embeddings, then transform the paragraph embeddings into text. Yarats and Lewis (2017) generate a discrete latent variable based on the context, then generates text conditioned upon it. 4.3 Fusion Models Previous work has investigated the integration of language models with seq2seq models. The two models can be leveraged together without architectural modifications: Ramachandran et al. (2016) use language models to initialize the encoder and decoder side of the seq2seq model independently, and Chorowski and Jaitly (2016) combine the predictions of the language model and seq2seq model solely at inference time. Recent work has also proposed deeper integration. Gulcehre et al. (2015) combined a trained language model with a trained seq2seq model to learn a gating function that joins them. Sriram et al. (2017) propose training the seq2seq model given the fixed language model then learning a gate to filter the information from the language model. 5 Experimental Setup 5.1 Baselines We evaluate a number of baselines: (1) Language Models: Non-hierarchical models for story generation, which do not condition on the prompt. We use both the gated convolutional language (GCNN) model of Dauphin et al. (2017) and our additional self-attention mechanism. (2) seq2seq: using LSTMs and convolutional seq2seq architectures, and Conv seq2seq with decoder self-attention. (3) Ensemble: an ensemble of two Conv seq2seq with self-attention models. (4) KNN: we also compare with a KNN model to find the closest prompt in the training set for each prompt in the test set. A TF-IDF vector for Model Valid Perplexity Test Perplexity Conv seq2seq 45.27 45.54 + self-attention 42.01 42.32 + multihead 40.12 40.39 + multiscale 38.76 38.91 + gating 37.37 37.94 Table 2: Effect of new attention mechanism. Gated multi-scale attention significantly improves the perplexity on the WRITINGPROMPTS dataset. each prompt was created using FASTTEXT (Bojanowski et al., 2016) and FAISS (Johnson et al., 2017) was used for KNN search. The retrieved story from the training set is limited to 150 words to match the length of generated stories. 5.2 Fusion Training To train the fusion model, we first pretrain a Conv seq2seq with self-attention model on the WRITINGPROMPTS dataset. This pretrained model is fixed and provided to the second Conv seq2seq with self-attention model during training time. The two models are integrated with the fusion mechanism described in Section 3.4. 5.3 Training We implement models with the fairseq-py library in PyTorch. Similar to Gehring et al. 
(2017), we train using the Nesterov accelerated gradient method (Sutskever et al., 2013) using gradient clipping (Pascanu et al., 2013). We perform hyperparameter optimization on each of our models by cross-validating with random search on a validation set. We provide model architectures in the appendix. 5.4 Generation We generate stories from our models using a top-k random sampling scheme. At each timestep, the model generates the probability of each word in the vocabulary being the likely next word. We randomly sample from the k = 10 most likely candidates from this distribution. Then, subsequent timesteps generate words based on the previously selected words. We find this sampling strategy substantially more effective than beam search, which tends to produce common phrases and repetitive text from the training set (Vijayakumar et al., 2016; Shao et al., 2017). Sentences pro894 Model # Parameters (mil) Valid Perplexity Test Perplexity GCNN LM 123.4 54.50 54.79 GCNN + self-attention LM 126.4 51.84 51.18 LSTM seq2seq 110.3 46.83 46.79 Conv seq2seq 113.0 45.27 45.54 Conv seq2seq + self-attention 134.7 37.37 37.94 Ensemble: Conv seq2seq + self-attention 270.3 36.63 36.93 Fusion: Conv seq2seq + self-attention 255.4 36.08 36.56 Table 3: Perplexity on WRITINGPROMPTS. We dramatically improve over standard seq2seq models. Figure 5: Human accuracy at pairing stories with the prompts used to generate them. People find that our fusion model significantly improves the link between the prompt and generated stories. duced by beam search tend to be short and generic. Completely random sampling can introduce very unlikely words, which can damage generation as the model has not seen such mistakes at training time. The restriction of sampling from the 10 most likely candidates reduces the risk of these lowprobability samples. For each model, we tune a temperature parameter for the softmax at generation time. To ease human evaluation, we generate stories of 150 words and do not generate unknown word tokens. For prompt generation, we use a selfattentive GCNN language model trained with the same prompt-side vocabulary as the sequence-tosequence story generation models. The language model to generate prompts has a validation perplexity of 63.06. Prompt generation is conducted using the top-k random sampling from the 10 most likely candidates, and the prompt is completed when the language model generates the end of prompt token. 5.5 Evaluation We propose a number of evaluation metrics to quantify the performance of our models. Many commonly used metrics, such as BLEU for maFigure 6: Accuracy of prompt ranking. The fusion model most accurately pairs prompt and stories. Figure 7: Accuracy on the prompt/story pairing task vs. number of generated stories. Our generative fusion model can produce many stories without degraded performance, while the KNN can only produce a limited number relevant stories. Model Human Preference Language model 32.68% Hierarchical Model 67.32% Table 4: Effect of Hierarchical Generation. Human judges prefer stories that were generated hierarchically by first creating a premise and creating a full story based on it with a seq2seq model. 895 Figure 8: Average weighting of each model in our Fusion model for the beginning of the generated story for the prompt Gates of Hell. The fused model (orange) is primarily used for words which are closely related to the prompt, whereas generic words are generated by the pre-trained model (green). 
chine translation or ROUGE for summarization, compute an n-gram overlap between the generated text and the human text—however, in our openended generation setting, these are not useful. We do not aim to generate a specific story; we want to generate viable and novel stories. We focus on measuring both the fluency of our models and their ability to adhere to the prompt. For automatic evaluation, we measure model perplexity on the test set and prompt ranking accuracy. Perplexity is commonly used to evaluate the quality of language models, and it reflects how fluently the model can produce the correct next word given the preceding words. We use prompt ranking to assess how strongly a model’s output depends on its input. Stories are decoded under 10 different prompts—9 randomly sampled prompts and 1 true corresponding prompt—and the likelihood of the story given the various prompts is recorded. We measure the percentage of cases where the true prompt is the most likely to generate the story. In our evaluation, we examined 1000 stories from the test set for each model. For human evaluation, we use Amazon Mechanical Turk to conduct a triple pairing task. We use each model to generate stories based on heldout prompts from the test set. Then, groups of three stories are presented to the human judges. The stories and their corresponding prompts are shuffled, and human evaluators are asked to select the correct pairing for all three prompts. 105 stories per model are grouped into questions, and each question is evaluated by 15 judges. Lastly, we conduct human evaluation to evaluate the importance of hierarchical generation for story writing. We use Amazon Mechanical Turk to compare the stories from hierarchical generation from a prompt with generation without a prompt. 400 pairs of stories were evaluated by 5 judges each in a blind test. 6 Results We analyze the effect of our modeling improvements on the WRITINGPROMPTS dataset. Effect of Hierarchical Generation: We explore leveraging our dataset to perform hierarchical story generation by first using a self-attentive GCNN language model to generate a prompt, and then using a fusion model to write a story given the generated prompt. We evaluate the effect of hierarchical generation using a human study in Table 4. 400 stories were generated from a selfattentive GCNN language model, and another 400 were generated from our hierarchical fusion model given generated prompts from a language model. In a blind comparison where raters were asked to choose the story they preferred reading, human raters preferred the hierarchical model 67% of the time. Effect of new attention mechanism: Table 2 shows the effect of the proposed additions to the self-attention mechanism proposed by Vaswani et al. (2017). Table 3 shows that deep multi-scale self-attention and fusion each significantly improve the perplexity compared to the baselines. In combination these additions to the Conv seq2seq baseline reduce the perplexity by 9 points. Effect of model fusion: Results in Table 3 show that adding our fusion mechanism substantially improves the likelihood of human-generated stories, and even outperforms an ensemble despite having fewer parameters. We observe in Figure 5 that fusion has a much more significant impact on the topicality of the stories. In comparison, ensembling has no effect on people’s ability to associate stories with a prompt, but adding model fusion leads improves the pairing accuracy of the human judges by 7%. 
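The automatic counterpart of this pairing task is the prompt-ranking accuracy described above; a small sketch follows, assuming a log_likelihood(prompt, story) scorer exposed by the trained seq2seq model (the interface name is an assumption).

```python
# Sketch of prompt-ranking accuracy: a story should be most likely under its true
# prompt when compared against 9 randomly drawn distractor prompts.
import random

def prompt_ranking_accuracy(pairs, all_prompts, log_likelihood, n_distractors=9, seed=0):
    rng = random.Random(seed)
    correct = 0
    for true_prompt, story in pairs:
        distractors = rng.sample(
            [p for p in all_prompts if p != true_prompt], n_distractors)
        scores = {p: log_likelihood(p, story) for p in [true_prompt] + distractors}
        correct += max(scores, key=scores.get) == true_prompt
    return correct / len(pairs)
```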
These results suggest that by training a second model on top of the first, we have encouraged that model to learn the challeng896 ing additional dependencies to relate to the source sequence. To our knowledge, these are the first results to show that fusion has such capabilities. Comparison with Nearest Neighbours: Nearest Neighbour Search (KNN) provides a strong baseline for text generation. Figure 5 shows that the fusion model can match the performance of nearest neighbour search in terms of the connection between the story and prompt. The real value in our generative approach is that it can produce an unlimited number of stories, whereas KNN can never generalize from its training data. To quantify this improvement, Figure 7 plots the relevance of the kth best story to a given prompt; the performance of KNN degrades much more rapidly. 7 Discussion 7.1 Generation Quality Our proposed fusion model is capable of generating unique text without copying directly from the training set. When analyzing 500 150-word generated stories from test-set prompts, the average longest common subsequence is 8.9. In contrast, the baseline Conv seq2seq model copies 10.2 words on average and the KNN baseline copies all 150 words from a story in the training set. Figure 8 shows the values of the fusion gates for an example story, averaged at each timestep. The pretrained seq2seq model acts similarly to a language model producing common words and punctuation. The second seq2seq model learns to focus on rare words, such as horned and robe. However, the fusion model has limitations. Using random sampling to generate can produce errors. For example, can’t is tokenized to ca n’t, and the model occasionally produces the first token but misses the second. A similar error is after one line of dialogue, the model may move to another line of dialogue without generating a newline token. A further obstacle is repetition. The model focuses frequently on what it has recently produced, which leads to the generation of similar text multiple times. In the generation of prompts using the GCNN language model, we find that prompts are fairly generic compared to human prompts. Language models often struggle to model rare words accurately, as the probability distribution over the next word is dominated by more common words. This tends to produce similar prompts, particularly at the start — we see many prompts that start with the man. In contrast, many of the human prompts are very unique (e.g. prompting stories in fantasy worlds such as Harry Potter and Game of Thrones) and the language model rarely produces the specific vocabulary required by these settings. 7.2 Use of Attention We analyze the encoder-decoder attention in the fusion model and find that unlike attention maps in machine translation, where each decoder timestep tends to attend to a different word on the encoderside, the attention map for each decoder timestep looks similar and focuses mainly on salient words in the prompt. We further look at the usage of the self-attention layers within the decoder. While they could be leveraged to look at words generated very far in the past, at many timesteps the selfattention focuses on the recent past. 8 Conclusion We have collected the first dataset for creative text generation based on short writing prompts. This new dataset pushes the boundaries of text generation by requiring longer range dependencies and conditioning on an abstract premise. 
Building on this dataset, we show through automatic and human evaluation that novel hierarchical models, self-attention mechanisms and model fusion significantly improves the fluency, topicality, and overall quality of the generated stories. References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representation (ICLR). Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606. Jan Chorowski and Navdeep Jaitly. 2016. Towards better decoding and language model integration in sequence to sequence models. arXiv preprint arXiv:1612.02695. Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. 897 Generation with Fusion of Conv seq2seq + Self-Attention Example Prompt 1: Aliens start abducting humans It has been two weeks, and the last of my kind has gone. It is only a matter of time until there will be nothing left. I’m not sure what the hell is going on... I can’t think. I can hear a distant scream. I think of a strange, alien sound. I try to speak, but am interrupted by something, something that feels like a drum, I ca not tell. I mean I’m just a little bit older than an average human. But I can, and I can feel the vibrations . I hear the sound of a ship approaching. The ground quakes at the force of the impact, and a loud explosion shatters the silence. Example Prompt 2: The scientists have discovered something terrible . The scientist stood there, a little dazed as he stared. “What is it?” He asked. “This...this...Thing...This is a virus. A chemical that can destroy entire planet and it is a very small, complex, chemical that could destroy any planet.” The scientist replied. His lab assistant looked down at the tablet. “I’ve just discovered it. I can’t believe it. It looks like it’s made of some sort of chemical that’s very dangerous.” “ Well, there’s a virus on the ground. It’s very effective...” “ I can’t believe what it is.” he said, pointing to the scientist . “ We don’t know what this thing is. We haven’t seen anything like it . We can’t even see anything like this. ” Dr. Jones stared at the scientist for a moment. “What do you mean what does it do ?” “It...It ’s a monster.” Example Prompt 3: The silence before the storm comes “I’m sorry,” she said “I can’t do this anymore.” She sat in the grass, the grass blowing behind her as she sat staring at the sky. Her eyes were still closed but she was still staring at it. Her hand was still shaking, her eyes closed as she watched the rain roll in and out of view. She could see the world through her eyelids now. Her hands were still shaking like she hadn’t eaten in days. Her fingers were still clenched tight and her fingers were now shaking. She wasn’t even sure if she was going to cry or if she just wanted to cry. It wasn’t until she was close enough to hear the words. “I ...I just...I need to...I ...” She was crying now and then, the rain was picking up. Her lips were shaking now and she could see that the storm was getting closer and closer. Her heart stopped when she heard the thunder. She knew it was coming. The storm was Generation with Language Model + Self-Attention Baseline Example 1: I’ve always been the curious type. 
Growing up in the ghetto, I’ve been around for more than a year now. I still haven’t heard it on my own. I’m not sure if I’m a good person. But I’m not the only one. I’ve been to the hospital, but it’s not my first time. I’ve been to my psychiatrist’s journals before. “Hey, are you ... are you okay?” I asked. “Yes” I said. “What’s wrong?” she replied, “I was just ... Doing something.” She said, “I’ve never been a bad person.” Example 2: The man was an accountant. He had to be. He had to be the next president. I looked back over the top and saw that his wife was crying in the kitchen. I looked at the clock. It seemed to be coming slower, but I knew if I did it would not be long before I was in my own home. I wasn’t sure. I had a hard time finding the right words to say. I was about to leave when he suddenly became angry and began talking to me. “Hello, sir, I’m John. What is your name?” “My name is Manuel and I’m a journalist.” I said Table 5: Example stories generated by the proposed hierarchical fusion approach compared to stories generated by a language model. Stories generated by the fusion model relate to the desired prompt and show increased coherence between sentences and ability to stay on one topic compared to the language modeling baseline. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann Dauphin. 2017. Convolutional sequence to sequence learning. Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535. Brent Harrison, Christopher Purdy, and Mark O Riedl. 2017. Toward automated story generation with markov chain monte carlo methods and deep neural networks. Parag Jain, Priyanka Agrawal, Abhijit Mishra, Mohak Sukhwani, Anirban Laha, and Karthik Sankaranarayanan. 2017. Story generation from sequence of independent short descriptions. arXiv preprint arXiv:1707.05501. Jeff Johnson, Matthijs Douze, and Herv´e J´egou. 2017. Billion-scale similarity search with gpus. arXiv preprint arXiv:1702.08734. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410. 898 Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. arXiv preprint arXiv:1506.06726. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015a. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055. Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. 2015b. A hierarchical neural autoencoder for paragraphs and documents. arXiv preprint arXiv:1506.01057. Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. arXiv preprint arXiv:1801.10198. Lara J Martin, Prithviraj Ammanabrolu, William Hancock, Shruti Singh, Brent Harrison, and Mark O Riedl. 2017. Event representations for automated story generation with deep neural nets. arXiv preprint arXiv:1706.01331. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In ICML. Prajit Ramachandran, Peter J Liu, and Quoc V Le. 2016. Unsupervised pretraining for sequence to sequence learning. arXiv preprint arXiv:1611.02683. Melissa Roemmele. 2016. 
Writing stories with help from recurrent neural networks. In AAAI. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685. Louis Shao, Stephan Gouws, Denny Britz, Anna Goldie, Brian Strope, and Ray Kurzweil. 2017. Generating long and diverse responses with neural conversation models. arXiv preprint arXiv:1701.03185. Anuroop Sriram, Heewoo Jun, Sanjeev Satheesh, and Adam Coates. 2017. Cold fusion: Training seq2seq models together with language models. arXiv preprint arXiv:1708.06426. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. Ilya Sutskever, James Martens, George E. Dahl, and Geoffrey E. Hinton. 2013. On the importance of initialization and momentum in deep learning. In ICML. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Neural Information Processing Systems (NIPS). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. arXiv preprint arXiv:1610.02424. Sam Wiseman, Stuart M Shieber, and Alexander M Rush. 2017. Challenges in data-to-document generation. arXiv preprint arXiv:1707.08052. Denis Yarats and Mike Lewis. 2017. Hierarchical text generation and planning for strategic dialogue. arXiv preprint arXiv:1712.05846. Xingxing Zhang and Mirella Lapata. 2014. Chinese poetry generation with recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 670–680.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 899–909 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 899 No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling Xin Wang∗, Wenhu Chen∗, Yuan-Fang Wang , William Yang Wang University of California, Santa Barbara {xwang,wenhuchen,yfwang,william}@cs.ucsb.edu Abstract Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. Code will be made available here1. 1 Introduction Recently, increasing attention has been focused on visual captioning (Chen et al., 2015; Xu et al., 2016; Wang et al., 2018c), which aims at describing the content of an image or a video. Though it has achieved impressive results, its capability of performing human-like understanding is still restrictive. To further investigate machine’s capa∗Equal contribution 1https://github.com/littlekobe/AREL Story #1: The brother and sister were ready for the first day of school. They were excited to go to their first day and meet new friends. They told their mom how happy they were. They said they were going to make a lot of new friends . Then they got up and got ready to get in the car . Story #2: The brother did not want to talk to his sister. The siblings made up. They started to talk and smile. Their parents showed up. They were happy to see them. (a) (b) (c) (d) (e) Captions: (a) A small boy and a girl are sitting together. (b) Two kids sitting on a porch with their backpacks on. (c) Two young kids with backpacks sitting on the porch. (d) Two young children that are very close to one another. (e) A boy and a girl smiling at the camera together. Figure 1: An example of visual storytelling and visual captioning. Both captions and stories are shown here: each image is captioned with one sentence, and we also demonstrate two diversified stories that match the same image sequence. bilities in understanding more complicated visual scenarios and composing more structured expressions, visual storytelling (Huang et al., 2016) has been proposed. Visual captioning is aimed at depicting the concrete content of the images, and its expression style is rather simple. In contrast, visual storytelling goes one step further: it summarizes the idea of a photo stream and tells a story about it. Figure 1 shows an example of visual captioning and visual storytelling. We have observed that stories contain rich emotions (excited, happy, not want) and imagination (siblings, parents, school, car). 
It, therefore, requires the capability to associate with concepts that do not explicitly appear in the images. Moreover, stories are more subjective, so there barely exists standard 900 templates for storytelling. As shown in Figure 1, the same photo stream can be paired with diverse stories, different from each other. This heavily increases the evaluation difficulty. So far, prior work for visual storytelling (Huang et al., 2016; Yu et al., 2017b) is mainly inspired by the success of visual captioning. Nevertheless, because these methods are trained by maximizing the likelihood of the observed data pairs, they are restricted to generate simple and plain description with limited expressive patterns. In order to cope with the challenges and produce more human-like descriptions, Rennie et al. (2016) have proposed a reinforcement learning framework. However, in the scenario of visual storytelling, the common reinforced captioning methods are facing great challenges since the hand-crafted rewards based on string matches are either too biased or too sparse to drive the policy search. For instance, we used the METEOR (Banerjee and Lavie, 2005) score as the reward to reinforce our policy and found that though the METEOR score is significantly improved, the other scores are severely harmed. Here we showcase an adversarial example with an average METEOR score as high as 40.2: We had a great time to have a lot of the. They were to be a of the. They were to be in the. The and it were to be the. The, and it were to be the. Apparently, the machine is gaming the metrics. Conversely, when using some other metrics (e.g. BLEU, CIDEr) to evaluate the stories, we observe an opposite behavior: many relevant and coherent stories are receiving a very low score (nearly zero). In order to resolve the strong bias brought by the hand-coded evaluation metrics in RL training and produce more human-like stories, we propose an Adversarial REward Learning (AREL) framework for visual storytelling. We draw our inspiration from recent progress in inverse reinforcement learning (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017) and propose the AREL algorithm to learn a more intelligent reward function. Specifically, we first incorporate a Boltzmann distribution to associate reward learning with distribution approximation, then design the adversarial process with two models – a policy model and a reward model. The policy model performs the primitive actions and produces the story sequence, while the reward model is responsible for learning the implicit reward function from human demonstrations. The learned reward function would be employed to optimize the policy in return. For evaluation, we conduct both automatic metrics and human evaluation but observe a poor correlation between them. Particularly, our method gains slight performance boost over the baseline systems on automatic metrics; human evaluation, however, indicates significant performance boost. Thus we further discuss the limitations of the metrics and validate the superiority of our AREL method in performing more intelligent understanding of the visual scenes and generating more human-like stories. Our main contributions are four-fold: • We propose an adversarial reward learning framework and apply it to boost visual story generation. • We evaluate our approach on the Visual Storytelling (VIST) dataset and achieve the state-of-the-art results on automatic metrics. 
• We empirically demonstrate that automatic metrics are not perfect for either training or evaluation. • We design and perform a comprehensive human evaluation via Amazon Mechanical Turk, which demonstrates the superiority of the generated stories of our method on relevance, expressiveness, and concreteness. 2 Related Work Visual Storytelling Visual storytelling is the task of generating a narrative story from a photo stream, which requires a deeper understanding of the event flow in the stream. Park and Kim (2015) has done some pioneering research on storytelling. Chen et al. (2017) proposed a multimodal approach for storyline generation to produce a stream of entities instead of human-like descriptions. Recently, a more sophisticated dataset for visual storytelling (VIST) has been released to explore a more human-like understanding of grounded stories (Huang et al., 2016). Yu et al. (2017b) proposes a multi-task learning algorithm for both album summarization and paragraph generation, achieving the best results on the VIST dataset. But these methods are still based on behavioral cloning and lack the ability to generate more structured stories. 901 Reinforcement Learning in Sequence Generation Recently, reinforcement learning (RL) has gained its popularity in many sequence generation tasks such as machine translation (Bahdanau et al., 2016), visual captioning (Ren et al., 2017; Wang et al., 2018b), summarization (Paulus et al., 2017; Chen et al., 2018), etc. The common wisdom of using RL is to view generating a word as an action and aim at maximizing the expected return by optimizing its policy. As pointed in (Ranzato et al., 2015), traditional maximum likelihood algorithm is prone to exposure bias and label bias, while the RL agent exposes the generative model to its own distribution and thus can perform better. But these works usually utilize hand-crafted metric scores as the reward to optimize the model, which fails to learn more implicit semantics due to the limitations of automatic metrics. Rethinking Automatic Metrics Automatic metrics, including BLEU (Papineni et al., 2002), CIDEr (Vedantam et al., 2015), METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004), have been widely applied to the sequence generation tasks. Using automatic metrics can ensure rapid prototyping and testing new models with fewer expensive human evaluation. However, they have been criticized to be biased and correlate poorly with human judgments, especially in many generative tasks like response generation (Lowe et al., 2017; Liu et al., 2016), dialogue system (Bruni and Fern´andez, 2017) and machine translation (Callison-Burch et al., 2006). The naive overlap-counting methods are not able to reflect many semantic properties in natural language, such as coherence, expressiveness, etc. Generative Adversarial Network Generative adversarial network (GAN) (Goodfellow et al., 2014) is a very popular approach for estimating intractable probabilities, which sidestep the difficulty by alternately training two models to play a min-max two-player game: min D max G E x∼pdata[log D(x)] + E z∼pz[log D(G(z))] , where G is the generator and D is the discriminator, and z is the latent variable. Recently, GAN has quickly been adopted to tackle discrete problems (Yu et al., 2017a; Dai et al., 2017; Wang et al., 2018a). The basic idea is to use Monte Carlo policy gradient estimation (Williams, 1992) to update the parameters of the generator. 
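As a toy illustration of this policy-gradient treatment of discrete outputs (and not of the AREL method itself, which is developed in Section 3), the sketch below performs one REINFORCE update of a recurrent generator using a discriminator score as the sequence-level reward. The generator, the discriminator stand-in, and all names and dimensions are hypothetical.

```python
# Illustrative only: a REINFORCE-style generator update driven by a
# discriminator score, as in sequence-GAN training. The discriminator is
# replaced by a trivial stand-in; all names and sizes are hypothetical.
import torch
import torch.nn as nn

vocab_size, hidden = 1000, 128
embed = nn.Embedding(vocab_size, hidden)
generator = nn.GRUCell(hidden, hidden)                 # toy recurrent generator
out_proj = nn.Linear(hidden, vocab_size)
discriminator_score = lambda seq: torch.sigmoid(seq.float().mean())  # stand-in for D(.)

h = torch.zeros(1, hidden)
tok = torch.zeros(1, dtype=torch.long)                 # start token
log_probs, tokens = [], []
for _ in range(10):                                    # sample a short sequence
    h = generator(embed(tok), h)
    dist = torch.distributions.Categorical(logits=out_proj(h))
    tok = dist.sample()
    log_probs.append(dist.log_prob(tok))
    tokens.append(tok)

reward = torch.log(discriminator_score(torch.stack(tokens)))   # log D(G(z)) as return
loss = -(reward.detach() * torch.stack(log_probs).sum())       # Monte Carlo policy gradient
loss.backward()                                                 # gradients for the generator
```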
Adversarial Objective Reward Model Policy Model Environment Reward Inverse RL RL Images references Sampled Story Images Figure 2: AREL framework for visual storytelling. Inverse Reinforcement Learning Reinforcement learning is known to be hindered by the need for an extensive feature and reward engineering, especially under the unknown dynamics. Therefore, inverse reinforcement learning (IRL) has been proposed to infer expert’s reward function. Previous IRL approaches include maximum margin approaches (Abbeel and Ng, 2004; Ratliff et al., 2006) and probabilistic approaches (Ziebart, 2010; Ziebart et al., 2008). Recently, adversarial inverse reinforcement learning methods provide an efficient and scalable promise for automatic reward acquisition (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017; Henderson et al., 2017). These approaches utilize the connection between IRL and energy-based model and associate every data with a scalar energy value by using Boltzmann distribution pθ(x) ∝exp(−Eθ(x)). Inspired by these methods, we propose a practical AREL approach for visual storytelling to uncover a robust reward function from human demonstrations and thus help produce human-like stories. 3 Our Approach 3.1 Problem Statement Here we consider the task of visual storytelling, whose objective is to output a word sequence W = (w1, w1, · · · , wT ), wt ∈V given an input image stream of 5 ordered images I = (I1, I2, · · · , I5), where V is the vocabulary of all output token. We formulate the generation as a markov decision process and design a reinforcement learning framework to tackle it. As described in Figure 2, our AREL framework is mainly composed of two modules: a policy model πβ(W) and a reward model Rθ(W). The policy model takes an image sequence I as the input and performs sequential actions (choosing words w from the vocabulary V) to form a narrative story W. The reward model 902 CNN My brother recently graduated college. It was a formal cap and gown event. My mom and dad attended. Later, my aunt and grandma showed up. When the event was over he even got congratulated by the mascot. Encoder Decoder Figure 3: Overview of the policy model. The visual encoder is a bidirectional GRU, which encodes the high-level visual features extracted from the input images. Its outputs are then fed into the RNN decoders to generate sentences in parallel. Finally, we concatenate all the generated sentences as a full story. Note that the five decoders share the same weights. is optimized by the adversarial objective (see Section 3.3) and aims at deriving a human-like reward from both human-annotated stories and sampled predictions. 3.2 Model Policy Model As is shown in Figure 3, the policy model is a CNN-RNN architecture. We fist feed the photo stream I = (I1, · · · , I5) into a pretrained CNN and extract their high-level image features. We then employ a visual encoder to further encode the image features as context vectors hi = [←− hi; −→ hi]. The visual encoder is a bidirectional gated recurrent units (GRU). In the decoding stage, we feed each context vector hi into a GRU-RNN decoder to generate a substory Wi. Formally, the generation process can be written as: si t = GRU(si t−1, [wi t−1, hi]) , (1) πβ(wi t|wi 1:t−1) = softmax(Wssi t + bs) , (2) where si t denotes the t-th hidden state of i-th decoder. We concatenate the previous token wi t−1 and the context vector hi as the input. Ws and bs are the projection matrix and bias, which output a probability distribution over the whole vocabulary V. 
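To make Eq. (1)-(2) concrete, the following is a minimal sketch of the CNN-RNN policy model, assuming precomputed CNN image features: a bidirectional GRU encodes the five image features into context vectors h_i, and a weight-shared GRU decoder predicts the next word of each sub-story. All dimensions, the vocabulary size, and the class and variable names are illustrative assumptions rather than the authors' exact implementation.

```python
# A minimal sketch of the policy model (Eq. 1-2, Figure 3), assuming
# precomputed CNN image features; dimensions and names are illustrative.
import torch
import torch.nn as nn

class PolicyModel(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512, embed_dim=512, vocab_size=9837):
        super().__init__()
        # Bidirectional visual encoder: h_i = [h_i_forward; h_i_backward]
        self.visual_encoder = nn.GRU(feat_dim, hidden // 2, batch_first=True,
                                     bidirectional=True)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One decoder cell shared by the five sub-story decoders.
        self.decoder = nn.GRUCell(embed_dim + hidden, hidden)
        self.out = nn.Linear(hidden, vocab_size)        # W_s, b_s in Eq. (2)

    def forward(self, image_feats, prev_words, prev_states):
        # image_feats: (batch, 5, feat_dim); prev_words: (batch, 5) token ids;
        # prev_states: (batch, 5, hidden) decoder states s_{t-1}^i.
        ctx, _ = self.visual_encoder(image_feats)       # context vectors h_i
        logits, states = [], []
        for i in range(image_feats.size(1)):            # one decoding step per image
            inp = torch.cat([self.embed(prev_words[:, i]), ctx[:, i]], dim=-1)
            s = self.decoder(inp, prev_states[:, i])    # Eq. (1)
            states.append(s)
            logits.append(self.out(s))                  # Eq. (2), pre-softmax scores
        return torch.stack(logits, dim=1), torch.stack(states, dim=1)
```

Unrolling this module with the previously sampled words and states generates the five sub-stories in parallel, and concatenating them gives the full story W.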
Eventually, the final story W is the concatenation of the sub-stories Wi. β denotes all the parameters of the encoder, the decoder, and the output layer. Story Convolution FC layer Pooling CNN my mom and dad attended . <EOS> + Reward Figure 4: Overview of the reward model. Our reward model is a CNN-based architecture, which utilizes convolution kernels with size 2, 3 and 4 to extract bigram, trigram and 4-gram representations from the input sequence embeddings. Once the sentence representation is learned, it will be concatenated with the visual representation of the input image, and then be fed into the final FC layer to obtain the reward. Reward Model The reward model Rθ(W) is a CNN-based architecture (see Figure 4). Instead of giving an overall score for the whole story, we apply the reward model to different story parts (substories) Wi and compute partial rewards, where i = 1, · · · , 5. We observe that the partial rewards are more fine-grained and can provide better guidance for the policy model. We first query the word embeddings of the substory (one sentence in most cases). Next, multiple convolutional layers with different kernel sizes are used to extract the n-grams features, which are then projected into the sentence-level representation space by pooling layers (the design here is inspired by Kim (2014)). In addition to the textual features, evaluating the quality of a story should also consider the image features for relevance. Therefore, we then combine the sentence representation with the visual feature of the input image through concatenation and feed them into the final fully connected decision layer. In the end, the reward model outputs an estimated reward value Rθ(W). The process can be written in formula: Rθ(W) = Wr(fconv(W) + WiICNN) + br, (3) where Wr, br denotes the weights in the output layer, and fconv denotes the operations in CNN. ICNN is the high-level visual feature extracted from the image, and Wi projects it into the sentence representation space. θ includes all the pa903 rameters above. 3.3 Learning Reward Boltzmann Distribution In order to associate story distribution with reward function, we apply EBM to define a Reward Boltzmann distribution: pθ(W) = exp(Rθ(W)) Zθ , (4) Where W is the word sequence of the story and pθ(W) is the approximate data distribution, and Zθ = P W exp(Rθ(W)) denotes the partition function. According to the energy-based model (LeCun et al., 2006), the optimal reward function R∗(W) is achieved when the Reward-Boltzmann distribution equals to the “real” data distribution pθ(W) = p∗(W). Adversarial Reward Learning We first introduce an empirical distribution pe(W) = 1(W∈D) |D| to represent the empirical distribution of the training data, where D denotes the dataset with |D| stories and 1 denotes an indicator function. We use this empirical distribution as the “good” examples, which provides the evidence for the reward function to learn from. In order to approximate the Reward Boltzmann distribution towards the “real” data distribution p∗(W), we design a min-max two-player game, where the Reward Boltzmann distribution pθ aims at maximizing the its similarity with empirical distribution pe while minimizing that with the “faked” data generated from policy model πβ. On the contrary, the policy distribution πβ tries to maximize its similarity with the Boltzmann distribution pθ. Formally, the adversarial objective function is defined as max β min θ KL(pe(W)||pθ(W)) −KL(πβ(W)||pθ(W)) . (5) We further decompose it into two parts. 
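Before the two parts of this objective are derived below, a minimal sketch of the CNN-based reward model Rθ (Eq. (3), Figure 4) may help make the architecture concrete. Note that Eq. (3) adds the projected visual feature to the text representation whereas the prose describes a concatenation; the sketch follows the equation, and all dimensions and names are assumptions rather than the authors' exact implementation.

```python
# A minimal sketch of the CNN-based reward model (Eq. 3, Figure 4): n-gram
# convolutions over the sub-story embeddings, pooled into a sentence
# representation and combined with the image feature to give a scalar reward.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, vocab_size=9837, embed_dim=300, feat_dim=2048, channels=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, channels, k) for k in (2, 3, 4)])  # bi/tri/4-gram kernels
        self.img_proj = nn.Linear(feat_dim, 3 * channels)            # W_i in Eq. (3)
        self.out = nn.Linear(3 * channels, 1)                        # W_r, b_r in Eq. (3)

    def forward(self, tokens, image_feat):
        # tokens: (batch, T) sub-story token ids; image_feat: (batch, feat_dim).
        x = self.embed(tokens).transpose(1, 2)                       # (batch, embed_dim, T)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        text_repr = torch.cat(pooled, dim=1)                         # f_conv(W)
        return self.out(text_repr + self.img_proj(image_feat))       # estimated reward R_theta(W)
```

Applying this model to each sub-story Wi with its corresponding image, as described above, yields the partial rewards that guide the policy.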
First, because the objective Jβ of the story generation policy is to minimize its similarity with the Boltzmann distribution pθ, the optimal policy that minimizes KL-divergence is thus π(W) ∼ exp(Rθ(W)), meaning if Rθ is optimal, the optimal πβ = π∗. In formula, Jβ = −KL(πβ(W)||pθ(W)) = E W ∼πβ(W )[Rθ(W)] + H(πβ(W)) , (6) Algorithm 1 The AREL Algorithm. 1: for episode ←1 to N do 2: collect story W by executing policy πθ 3: if Train-Reward then 4: θ ←θ −η × ∂Jθ ∂θ (see Equation 9) 5: else if Train-Policy then 6: collect story ˜W from empirical pe 7: β ←β −η × ∂Jβ ∂β (see Equation 9) 8: end if 9: end for where H denotes the entropy of the policy model. On the other hand, the objective Jθ of the reward function is to distinguish between humanannotated stories and machine-generated stories. Hence it is trying to minimize the KL-divergence with the empirical distribution pe and maximize the KL-divergence with the approximated policy distribution πβ: Jθ =KL(pe(W)||pθ(W)) −KL(πβ(W)||pθ(W)) = X W [pe(W)Rθ(W) −πβ(W)Rθ(W)] −H(pe) + H(πβ) , (7) Since H(πβ) and H(pe) are irrelevant to θ, we denote them as constant C. Therefore, the objective Jθ can be further derived as Jθ = E W ∼pe(W )[Rθ(W)] − E W ∼πβ(W )[Rθ(W)] + C . (8) Here we propose to use stochastic gradient descent to optimize these two models alternately. Formally, the gradients can be written as ∂Jθ ∂θ = E W ∼pe(W ) ∂Rθ(W) ∂θ − E W ∼πβ(W ) ∂Rθ(W) ∂θ , ∂Jβ ∂β = E W ∼πβ(W )(Rθ(W) + log πθ(W) −b)∂log πβ(W) ∂β , (9) where b is the estimated baseline to reduce the variance. Training & Testing As described in Algorithm 1, we introduce an alternating algorithm to train these two models using stochastic gradient descent. During testing, the policy model is used with beam search to produce the story. 4 Experiments and Analysis 4.1 Experimental Setup VIST Dataset The VIST dataset (Huang et al., 2016) is the first dataset for sequential vision-tolanguage tasks including visual storytelling, which 904 consists of 10,117 Flickr albums with 210,819 unique photos. In this paper, we mainly evaluate our AREL method on this dataset. After filtering the broken images2, there are 40,098 training, 4,988 validation, and 5,050 testing samples. Each sample contains one story that describes 5 selected images from a photo album (mostly one sentence per image). And the same album is paired with 5 different stories as references. In our experiments, we used the same split settings as in (Huang et al., 2016; Yu et al., 2017b) for a fair comparison. Evaluation Metrics In order to comprehensively evaluate our method on storytelling dataset, we adopted both the automatic metrics and human evaluation as our criterion. Four diverse automatic metrics were used in our experiments: BLEU, METEOR, ROUGE-L, and CIDEr. We utilized the open source evaluation code3 used in (Yu et al., 2017b). For human evaluation, we employed the Amazon Mechanical Turk to perform two kinds of user studies (see Section 4.3 for more details). Training Details We employ pretrained ResNet-152 model (He et al., 2016) to extract image features from the photo stream. We built a vocabulary of size 9,837 to include words appearing more than three times in the training set. More training details can be found at Appendix B. 4.2 Automatic Evaluation In this section, we compare our AREL method with the state-of-the-art methods as well as standard reinforcement learning algorithms on automatic evaluation metrics. Then we further discuss the limitations of the hand-crafted metrics on evaluating human-like stories. 
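Before turning to the comparison with the state of the art, the alternating training procedure of Algorithm 1 can be summarized in code. The sketch below reuses the PolicyModel and RewardModel sketches given earlier; the variance-reducing baseline b and the entropy term of Eq. (9), beam search, and all batching details are simplified or omitted, so it approximates the procedure rather than reproducing the authors' training code.

```python
# A minimal sketch of the alternating AREL updates (Algorithm 1, Eq. 9),
# reusing the PolicyModel and RewardModel sketches above. The baseline b and
# the entropy term are omitted; sampling is truncated to a fixed length.
import torch

policy, reward_model = PolicyModel(), RewardModel()
opt_beta = torch.optim.Adam(policy.parameters(), lr=1e-4)          # policy parameters beta
opt_theta = torch.optim.Adam(reward_model.parameters(), lr=1e-4)   # reward parameters theta

def sample_stories(image_feats, max_len=10):
    """Sample one sub-story per image; return token ids and summed log-probs."""
    batch, n_img = image_feats.size(0), image_feats.size(1)
    words = torch.zeros(batch, n_img, dtype=torch.long)             # start tokens
    states = torch.zeros(batch, n_img, 512)
    tokens, log_probs = [], torch.zeros(batch, n_img)
    for _ in range(max_len):
        logits, states = policy(image_feats, words, states)
        dist = torch.distributions.Categorical(logits=logits)
        words = dist.sample()
        log_probs = log_probs + dist.log_prob(words)
        tokens.append(words)
    return torch.stack(tokens, dim=2), log_probs                    # (batch, 5, T), (batch, 5)

def arel_step(image_feats, human_tokens, train_reward):
    # image_feats: (batch, 5, feat_dim); human_tokens: (batch, 5, T) reference sub-stories.
    sampled, log_prob = sample_stories(image_feats)
    img = image_feats.flatten(0, 1)                                  # one image per sub-story
    if train_reward:   # theta-step: raise R_theta on human stories, lower it on samples
        loss = reward_model(sampled.flatten(0, 1), img).mean() \
             - reward_model(human_tokens.flatten(0, 1), img).mean()
        opt_theta.zero_grad(); loss.backward(); opt_theta.step()
    else:              # beta-step: REINFORCE with the learned reward as the return
        r = reward_model(sampled.flatten(0, 1), img).view_as(log_prob).detach()
        loss = -(r * log_prob).mean()
        opt_beta.zero_grad(); loss.backward(); opt_beta.step()
```

Alternating calls to arel_step with train_reward set to True and False at the chosen frequency reproduces the loop of Algorithm 1; at test time the policy is instead decoded with beam search.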
Comparison with SOTA on Automatic Metrics In Table 1, we compare our method with Huang et al. (2016) and Yu et al. (2017b), which report achieving best-known results on the VIST dataset. We first implement a strong baseline model (XEss), which share the same architecture with our policy model but is trained with cross-entropy loss and scheduled sampling. Besides, we adopt the traditional generative adversarial training for comparison (GAN). As shown in Table 1, our XEss model already outperforms the best-known re2There are only 3 (out of 21,075) broken images in the test set, which basically has no influence on the final results. Moreover, Yu et al. (2017b) also removed the 3 pictures, so it is a fair comparison. 3https://github.com/lichengunc/vist_eval Method B-1 B-2 B-3 B-4 M R C Huang et al. 31.4 Yu et al. 21.0 34.1 29.5 7.5 XE-ss 62.3 38.2 22.5 13.7 34.8 29.7 8.7 GAN 62.8 38.8 23.0 14.0 35.0 29.5 9.0 AREL-s-50 63.8 38.9 22.9 13.8 34.9 29.4 9.5 AREL-t-50 63.4 39.0 23.1 14.1 35.2 29.6 9.5 AREL-s-100 63.9 39.1 23.0 13.9 35.0 29.7 9.6 AREL-t-100 63.8 39.1 23.2 14.1 35.0 29.5 9.4 Table 1: Automatic evaluation on the VIST dataset. We report BLEU (B), METEOR (M), ROUGH-L (R), and CIDEr (C) scores of the SOTA systems and the models we implemented, including XE-ss, GAN and AREL. AREL-s-N denotes AREL models with sigmoid as output activation and alternate frequency as N, while ARELt-N denoting AREL models with tahn as the output activation (N = 50 or 100). sults on the VIST dataset, and the GAN model can bring a performance boost. We then use the XEss model to initialize our policy model and further train it with AREL. Evidently, our AREL model performs the best and achieves the new state-ofthe-art results across all metrics. But, compared with the XE-ss model, the performance gain is minor, especially on METEOR and ROUGE-L scores. However, in Sec. 4.3, the extensive human evaluation has indicated that our AREL framework brings a significant improvement on generating human-like stories over the XE-ss model. The inconsistency of automatic evaluation and human evaluation lead to a suspect that these hand-crafted metrics lack the ability to fully evaluate stories’ quality due to the complicated characteristics of the stories. Therefore, we conduct experiments to analyze and discuss the defects of the automatic metrics in section 4.2. Limitations of Automatic Metrics As we claimed in the introduction, string-match-based automatic metrics are not perfect and fail to evaluate some semantic characteristics of the stories, like the expressiveness and coherence of the stories. In order to confirm our conjecture, we utilize automatic metrics as rewards to reinforce the visual storytelling model by adopting policy gradient with baseline to train the policy model. The quantitative results are demonstrated in Table 1. Apparently, METEOR-RL and ROUGE-RL are severely ill-posed: they obtain the highest scores on their own metrics but damage the other met905 Method B-1 B-2 B-3 B-4 M R C XE-ss 62.3 38.2 22.5 13.7 34.8 29.7 8.7 BLEU-RL 62.1 38.0 22.6 13.9 34.6 29.0 8.9 METEOR-RL 68.1 35.0 15.4 6.8 40.2 30.0 1.2 ROUGE-RL 58.1 18.5 1.6 0 27.0 33.8 0 CIDEr-RL 61.9 37.8 22.5 13.8 34.9 29.7 8.1 AREL (avg) 63.7 39.0 23.1 14.0 35.0 29.6 9.5 Table 2: Comparison with different RL models with different metric scores as the rewards. We report the average scores of the AREL models as AREL (avg). Although METEOR-RL and ROUGE-RL models achieve very high scores on their own metrics, the underlined scores are severely damaged. 
Actually, they are gaming their own metrics with nonsense sentences. rics severely. We observe that these models are actually overfitting to a given metric while losing the overall coherence and semantical correctness. Same as METEOR score, there is also an adversarial example for ROUGE-L4, which is nonsense but achieves an average ROUGE-L score of 33.8. Besides, as can be seen in Table 1, after reinforced training, BLEU-RL and CIDEr-RL do not bring a consistent improvement over the XE-ss model. We plot the histogram distributions of both BLEU-3 and CIDEr scores on the test set in Figure 5. An interesting fact is that there are a large number of samples with nearly zero score on both metrics. However, we observed those “zero-score” samples are not pointless results; instead, lots of them make sense and deserve a better score than zero. Here is a “zero-score” example on BLEU-3: I had a great time at the restaurant today. The food was delicious. I had a lot of food. The food was delicious. T had a great time. The corresponding reference is The table of food was a pleasure to see! Our food is both nutritious and beautiful! Our chicken was especially tasty! We love greens as they taste great and are healthy! The fruit was a colorful display that tantalized our palette.. Although the prediction is not as good as the reference, it is actually coherent and relevant to the 4An adversarial example for ROUGE-L: we the was a . and to the . we the was a . and to the . we the was a . and to the . we the was a . and to the . we the was a . and to the . Method Win Lose Unsure XE-ss 22.4% 71.7% 5.9% BLEU-RL 23.4% 67.9% 8.7% CIDEr-RL 13.8% 80.3% 5.9% GAN 34.3% 60.5% 5.2% AREL 38.4% 54.2% 7.4% Table 3: Turing test results. theme “food and eating”, which showcases the defeats of using BLEU and CIDEr scores as a reward for RL training. Moreover, we compare the human evaluation scores with these two metric scores in Figure 5. Noticeably, both BLEU-3 and CIDEr have a poor correlation with the human evaluation scores. Their distributions are more biased and thus cannot fully reflect the quality of the generated stories. In terms of BLEU, it is extremely hard for machines to produce the exact 3-gram or 4-gram matching, so the scores are too low to provide useful guidance. CIDEr measures the similarity of a sentence to the majority of the references. However, the references to the same image sequence are photostream different from each other, so the score is very low and not suitable for this task. In contrast, our AREL framework can lean a more robust reward function from human-annotated stories, which is able to provide better guidance to the policy and thus improves its performances over different metrics. Comparison with GAN We here compare our method with traditional GAN (Goodfellow et al., 2014), the update rule for generator can be generally classified into two categories. We demonstrate their corresponding objectives and ours as follows: GAN1 : Jβ = E W∼pβ[−log Rθ(W)] , GAN2 : Jβ = E W∼pβ[log(1 −Rθ(W))] , ours : Jβ = E W∼pβ[−Rθ(W)] . As discussed in Arjovsky et al. (2017), GAN1 is prone to the unstable gradient issue and GAN2 is prone to the vanishing gradient issue. Analytically, our method does not suffer from these two common issues and thus is able converge to optimum solutions more easily. From Table 1, we can observe slight gains of using AREL over GAN 906 Figure 5: Metric score distributions. 
We plot the histogram distributions of BLEU-3 and CIDEr scores on the test set, as well as the human evaluation score distribution on the test samples. For a fair comparison, we use the Turing test results to calculate the human evaluation scores (see Section 4.3). Basically, 0.2 score is given if the generated story wins the Turing test, 0.1 for tie, and 0 if losing. Each sample has 5 scores from 5 judges, and we use the sum as the human evaluation score, so it is in the range [0, 1]. AREL vs XE-ss AREL vs BLEU-RL AREL vs CIDEr-RL AREL vs GAN Choice (%) AREL XE-ss Tie AREL BLEU-RL Tie AREL CIDEr-RL Tie AREL GAN Tie Relevance 61.7 25.1 13.2 55.8 27.9 16.3 56.1 28.2 15.7 52.9 35.8 11.3 Expressiveness 66.1 18.8 15.1 59.1 26.4 14.5 59.1 26.6 14.3 48.5 32.2 19.3 Concreteness 63.9 20.3 15.8 60.1 26.3 13.6 59.5 24.6 15.9 49.8 35.8 14.4 Table 4: Pairwise human comparisons. The results indicate the consistent superiority of our AREL model in generating more human-like stories than the SOTA methods. with automatic metrics, therefore we further deploy human evaluation for a better comparison. 4.3 Human Evaluation Automatic metrics cannot fully evaluate the capability of our AREL method. Therefore, we perform two different kinds of human evaluation studies on Amazon Mechanical Turk: Turing test and pairwise human evaluation. For both tasks, we use 150 stories (750 images) sampled from the test set, each assigned to 5 workers to eliminate human variance. We batch six items as one assignment and insert an additional assignment as a sanity check. Besides, the order of the options within each item is shuffled to make a fair comparison. Turing Test We first conduct five independent Turing tests for XE-ss, BLEU-RL, CIDErRL, GAN, and AREL models, during which the worker is given one human-annotated sample and one machine-generated sample, and needs to decide which is human-annotated. As shown in Table 3, our AREL model significantly outperforms all the other baseline models in the Turing test: it has much more chances to fool AMT worker (the ratio is AREL:XE-ss:BLEU-RL:CIDEr-RL:GAN = 45.8%:28.3%:32.1%:19.7%:39.5%), which confirms the superiority of our AREL framework in generating human-like stories. Unlike automatic metric evaluation, the Turing test has indicated a much larger margin between AREL and other competing algorithms. Thus, we empirically confirm that metrics are not perfect in evaluating many implicit semantic properties of natural language. Besides, the Turing test of our AREL model reveals that nearly half of the workers are fooled by our machine generation, indicating a preliminary success toward generating human-like stories. Pairwise Comparison In order to have a clear comparison with competing algorithms with respect to different semantic features of the stories, we further perform four pairwise comparison tests: AREL vs XE-ss/BLEU-RL/CIDErRL/GAN. For each photo stream, the worker is presented with two generated stories and asked to make decisions from the three aspects: relevance5, expressiveness6 and concreteness7. This head-tohead compete is designed to help us understand in what aspect our model outperforms the competing algorithms, which is displayed in Table 4. Consistently on all the three comparisons, a large majority of the AREL stories trumps the competing systems with respect to their relevance, 5Relevance: the story accurately describes what is happening in the image sequence and covers the main objects. 
6Expressiveness: coherence, grammatically and semantically correct, no repetition, expressive language style. 7Concreteness: the story should narrate concretely what is in the image rather than giving very general descriptions. 907 XE-ss We took a trip to the mountains. There were many different kinds of different kinds. We had a great time. He was a great time. It was a beautiful day. AREL The family decided to take a trip to the countryside. There were so many different kinds of things to see. The family decided to go on a hike. I had a great time. At the end of the day, we were able to take a picture of the beautiful scenery. Humancreated Story We went on a hike yesterday. There were a lot of strange plants there. I had a great time. We drank a lot of water while we were hiking. The view was spectacular. Figure 6: Qualitative comparison example with XE-ss. The direct comparison votes (AREL:XE-ss:Tie) were 5:0:0 on Relevance, 4:0:1 on Expressiveness, and 5:0:0 on Concreteness. expressiveness, and concreteness. Therefore, it empirically confirms that our generated stories are more relevant to the image sequences, more coherent and concrete than the other algorithms, which however is not explicitly reflected by the automatic metric evaluation. 4.4 Qualitative Analysis Figure 6 gives a qualitative comparison example between AREL and XE-ss models. Looking at the individual sentences, it is obvious that our results are more grammatically and semantically correct. Then connecting the sentences together, we observe that the AREL story is more coherent and describes the photo stream more accurately. Thus, our AREL model significantly surpasses the XEss model on all the three aspects of the qualitative example. Besides, it won the Turing test (3 out 5 AMT workers think the AREL story is created by a human). In the appendix, we also show a negative case that fails the Turing test. 5 Conclusion In this paper, we not only introduce a novel adversarial reward learning algorithm to generate more human-like stories given image sequences, but also empirically analyze the limitations of the automatic metrics for story evaluation. We believe there are still lots of improvement space in the narrative paragraph generation tasks, like how to better simulate human imagination to create more vivid and diversified stories. Acknowledgment We thank Adobe Research for supporting our language and vision research. We would also like to thank Licheng Yu for clarifying the details of his paper and the anonymous reviewers for their thoughtful comments. This research was sponsored in part by the Army Research Laboratory under cooperative agreements W911NF09-20053. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein. References Pieter Abbeel and Andrew Y Ng. 2004. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning, page 1. ACM. Martin Arjovsky, Soumith Chintala, and L´eon Bottou. 2017. Wasserstein gan. arXiv preprint arXiv:1701.07875. Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. An actor-critic algorithm for sequence prediction. 
arXiv preprint arXiv:1607.07086. Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings 908 of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72. Elia Bruni and Raquel Fern´andez. 2017. Adversarial evaluation for open-domain dialogue generation. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 284–288. Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluation the role of bleu in machine translation research. In 11th Conference of the European Chapter of the Association for Computational Linguistics. Wenhu Chen, Guanlin Li, Shuo Ren, Shujie Liu, Zhirui Zhang, Mu Li, and Ming Zhou. 2018. Generative bridging network in neural sequence prediction. In NAACL. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Doll´ar, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325. Zhiqian Chen, Xuchao Zhang, Arnold P. Boedihardjo, Jing Dai, and Chang-Tien Lu. 2017. Multimodal storytelling via generative adversarial imitation learning. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 3967–3973. Bo Dai, Sanja Fidler, Raquel Urtasun, and Dahua Lin. 2017. Towards diverse and natural image descriptions via a conditional gan. In The IEEE International Conference on Computer Vision (ICCV). Chelsea Finn, Paul Christiano, Pieter Abbeel, and Sergey Levine. 2016. A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models. arXiv preprint arXiv:1611.03852. Justin Fu, Katie Luo, and Sergey Levine. 2017. Learning robust rewards with adversarial inverse reinforcement learning. arXiv preprint arXiv:1710.11248. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Peter Henderson, Wei-Di Chang, Pierre-Luc Bacon, David Meger, Joelle Pineau, and Doina Precup. 2017. Optiongan: Learning joint reward-policy options using generative adversarial inverse reinforcement learning. arXiv preprint arXiv:1709.06683. Jonathan Ho and Stefano Ermon. 2016. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pages 4565–4573. Ting-Hao K. Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Aishwarya Agrawal, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, et al. 2016. Visual storytelling. In 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2016). Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and F Huang. 2006. A tutorial on energy-based learning. Predicting structured data, 1(0). Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. 
How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023. Ryan Lowe, Michael Noseworthy, Iulian V Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. arXiv preprint arXiv:1708.07149. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Cesc C Park and Gunhee Kim. 2015. Expressing an image stream with a sequence of natural sentences. In Advances in Neural Information Processing Systems, pages 73–81. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732. Nathan D Ratliff, J Andrew Bagnell, and Martin A Zinkevich. 2006. Maximum margin planning. In Proceedings of the 23rd international conference on Machine learning, pages 729–736. ACM. Zhou Ren, Xiaoyu Wang, Ning Zhang, Xutao Lv, and Li-Jia Li. 2017. Deep reinforcement learning-based 909 image captioning with embedding reward. In Proceeding of IEEE conference on Computer Vision and Pattern Recognition (CVPR). Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2016. Self-critical sequence training for image captioning. arXiv preprint arXiv:1612.00563. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566–4575. Jing Wang, Jianlong Fu, Jinhui Tang, Zechao Li, and Tao Mei. 2018a. Show, reward and tell: Automatic generation of narrative paragraph from photo stream by adversarial training. AAAI. Xin Wang, Wenhu Chen, Jiawei Wu, Yuan-Fang Wang, and William Yang Wang. 2018b. Video captioning via hierarchical reinforcement learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Xin Wang, Yuan-Fang Wang, and William Yang Wang. 2018c. Watch, listen, and describe: Globally and locally aligned cross-modal attentions for video captioning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016. Msrvtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017a. Seqgan: Sequence generative adversarial nets with policy gradient. In AAAI, pages 2852– 2858. Licheng Yu, Mohit Bansal, and Tamara Berg. 2017b. Hierarchically-attentive rnn for album summarization and storytelling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 966–971, Copenhagen, Denmark. Association for Computational Linguistics. Brian D Ziebart. 2010. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. 
Carnegie Mellon University. Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. 2008. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pages 1433–1438. Chicago, IL, USA.
2018
83
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 910–921 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 910 Bridging Languages through Images with Deep Partial Canonical Correlation Analysis Guy Rotman1, Ivan Vuli´c2 and Roi Reichart1 1 Faculty of Industrial Engineering and Management, Technion, IIT 2 Language Technology Lab, University of Cambridge [email protected] [email protected] [email protected] Abstract We present a deep neural network that leverages images to improve bilingual text embeddings. Relying on bilingual image tags and descriptions, our approach conditions text embedding induction on the shared visual information for both languages, producing highly correlated bilingual embeddings. In particular, we propose a novel model based on Partial Canonical Correlation Analysis (PCCA). While the original PCCA finds linear projections of two views in order to maximize their canonical correlation conditioned on a shared third variable, we introduce a non-linear Deep PCCA (DPCCA) model, and develop a new stochastic iterative algorithm for its optimization. We evaluate PCCA and DPCCA on multilingual word similarity and cross-lingual image description retrieval. Our models outperform a large variety of previous methods, despite not having access to any visual signal during test time inference.1 1 Introduction Research in multi-modal semantics deals with the grounding problem (Harnad, 1990), motivated by evidence that many semantic concepts, irrespective of the actual language, are grounded in the perceptual system (Barsalou and Wiemer-Hastings, 2005). In particular, recent studies have shown that performance on NLP tasks can be improved by joint modeling of text and vision, with multimodal and perceptually enhanced representation learning outperforming purely textual representa1Our code and data are available at: https://github. com/rotmanguy/DPCCA. tions (Feng and Lapata, 2010; Kiela and Bottou, 2014; Lazaridou et al., 2015). These findings are not surprising, and can be explained by the fact that humans understand language not only by its words, but also by their visual/perceptual context. The ability to connect vision and language has also enabled new tasks which require both visual and language understanding, such as visual question answering (Antol et al., 2015; Fukui et al., 2016; Xu and Saenko, 2016), image-to-text retrieval and text-to-image retrieval (Kiros et al., 2014; Mao et al., 2014), image caption generation (Farhadi et al., 2010; Mao et al., 2015; Vinyals et al., 2015; Xu et al., 2015), and visual sense disambiguation (Gella et al., 2016). While the main focus is still on monolingual settings, the fact that visual data can serve as a natural bridge between languages has sparked additional interest towards multilingual multi-modal modeling. Such models induce bilingual multi-modal spaces based on multi-view learning (Calixto et al., 2017; Gella et al., 2017; Rajendran et al., 2016). In this work, we propose a novel effective approach for learning bilingual text embeddings conditioned on shared visual information. This additional perceptual modality bridges the gap between languages and reveals latent connections between concepts in the multilingual setup. The shared visual information in our work takes the form of images with word-level tags or sentence-level descriptions assigned in more than one language. 
We propose a deep neural architecture termed Deep Partial Canonical Correlation Analysis (DPCCA) based on the Partial CCA (PCCA) method (Rao, 1969). To the best of our knowledge, PCCA has not been used in multilingual settings before. In short, PCCA is a variant of CCA which learns maximally correlated linear projections of two views (e.g., two language-specific “text-based views”) conditioned on a shared third view (e.g., 911 the “visual view”). We discuss the PCCA and DPCCA methods in §3 and show how they can be applied without having access to the shared images at test time inference. PCCA inherits one disadvantageous property from CCA: both methods compute estimates for covariance matrices based on all training data. This would prevent feasible training of their deep nonlinear variants, since deep neural nets (DNNs) are predominantly optimized via stochastic optimization algorithms. To resolve this major hindrance, we propose an effective optimization algorithm for DPCCA, inspired by the work of Wang et al. (2015b) on Deep CCA (DCCA) optimization. We evaluate our DPCCA architecture on two semantic tasks: 1) multilingual word similarity and 2) cross-lingual image description retrieval. For the former, we construct and provide to the community a new Word-Image-Word (WIW) dataset containing bilingual lexicons for three languages with shared images for 5K+ concepts. WIW is used as training data for word similarity experiments, while evaluation is conducted on the standard multilingual SimLex-999 dataset (Hill et al., 2015; Leviant and Reichart, 2015). The results reveal stable improvements over a large space of non-deep and deep CCA-style baselines in both tasks. Most importantly, 1) PCCA is overall better than other methods which do not use the additional perceptual view; 2) DPCCA outperforms PCCA, indicating the importance of nonlinear transformations modeled through DNNs; 3) DPCCA outscores DCCA, again verifying the importance of conditioning multilingual text embedding induction on the shared visual view; and 4) DPCCA outperforms two recent multi-modal bilingual models which also leverage visual information (Gella et al., 2017; Rajendran et al., 2016). 2 Related Work This work is related to two research threads: 1) multi-modal models that combine vision and language, with a focus on multilingual settings; 2) correlational multi-view models based on CCA which learn a shared vector space for multiple views. Multi-Modal Modeling in Multilingual Settings Research in cognitive science suggests that human meaning representations are grounded in our perceptual system and sensori-motor experience (Harnad, 1990; Lakoff and Johnson, 1999; Louwerse, 2011). Visual context serves as a useful crosslingual grounding signal (Bruni et al., 2014; Glavaˇs et al., 2017) due to its language invariance, even enabling the induction of word-level bilingual semantic spaces solely through tagged images obtained from the Web (Bergsma and Van Durme, 2011; Kiela et al., 2015). Vuli´c et al. (2016) combine text embeddings with visual features via simple techniques of concatenation and averaging to obtain bilingual multi-modal representations, with noted improvements over text-only embeddings on word similarity and bilingual lexicon extraction. However, similar to the monolingual model of Kiela and Bottou (2014), their models lack the training phase, and require the visual signal at test time. Recent work from Gella et al. 
(2017) exploits visual content as a bridge between multiple languages by optimizing a contrastive loss function. Furthermore, Rajendran et al. (2016) extend the work of Chandar et al. (2016) and propose to use a pivot representation in multimodal multilingual setups, with English representations serving as the pivot. While these works learn shared multimodal multilingual vector spaces, we demonstrate improved performance with our models (see §7). Finally, although not directly comparable, recent work in neural machine translation has constructed models that can translate image descriptions by additionally relying on visual features of the image provided (Calixto and Liu, 2017; Elliott et al., 2015; Hitschler et al., 2016; Huang et al., 2016; Nakayama and Nishida, 2017, inter alia). Correlational Models CCA-based techniques support multiple views on related data: e.g., when coupled with a bilingual dictionary, input monolingual word embeddings for two different languages can be seen as two views of the same latent semantic signal. Recently, CCA-based models for bilingual text embedding induction were proposed. These models rely on the basic CCA model (Chandar et al., 2016; Faruqui and Dyer, 2014), its deep variant (Lu et al., 2015), and a CCA extension which supports more than two views (Funaki and Nakayama, 2015; Rastogi et al., 2015). In this work, we propose to use (D)PCCA, which organically supports our setup: it conditions the two (textual) views on a shared (visual) view. CCA-based methods (including PCCA) require the estimation of covariance matrices over all training data (Kessy et al., 2017). This hinders the use of DNNs with these models, as DNNs are typically trained via stochastic optimization over mini912 batches on very large training sets. To address this limitation, various optimization methods for Deep CCA were proposed. Andrew et al. (2013) use L-BFGS (Byrd et al., 1995) over all training samples, while Arora and Livescu (2013) and Yan and Mikolajczyk (2015) train with large batches. However, these methods suffer from high memory complexity with unstable numerical computations. Wang et al. (2015b) have recently proposed a stochastic approach for CCA and DCCA which copes well with small and large batch sizes while preserving high model performance. They use orthogonal iterations to estimate a moving average of the covariance matrices, which improves memory consumption. Therefore, we base our novel optimization algorithm for DPCCA on this approach. 3 Methodology: Deep Partial CCA Given two image descriptions x and y in two languages and an image z that they refer to, the task is to learn a shared bilingual space such that similar descriptions obtain similar representations in the induced space. The image z serves as a shared third view on the textual data during training. The representation model is then utilized in cross-lingual and monolingual tasks. In this paper we focus on the more realistic scenario where no relevant visual content is available at test time. For this goal we propose a novel Deep Partial CCA (DPCCA) framework. In what follows, we first review the CCA model and its deep variant: DCCA. We then introduce our DPCCA architecture, and describe our new stochastic optimization algorithm for DPCCA. 
3.1 CCA and Deep CCA DCCA (Andrew et al., 2013) extends CCA by learning non-linear (instead of linear) transformations of features contained in the input matrices X ∈RDx×N and Y ∈RDy×N, where Dx and Dy are input vector dimensionalities, and N is the number of input items. Since CCA is a special case of the non-linear DCCA (see below), we here briefly outline the more general DCCA model. The DCCA architecture is illustrated in Figure 1a. Non-linear transformations are achieved through two DNNs f : RDx×N →RD′ x×N and g : RDy×N →RD′ y×N for X and Y . D′ x and D′ y are the output dimensionalities. A final linear layer is added to resemble the linear CCA projection. The goal is to project the features of X and Y into a shared L-dimensional (1 ≤ L ≤ min(D′ x, D′ y)) space such that the canonical correlation of the final outputs F (X) = W T f(X) and G(Y ) = V T g(Y ) is maximized. W ∈ RD′ x×L and V ∈RD′ y×L are projection matrices: they project the final outputs of the DNNs to the shared space. Wf and Vg (the parameters of f and g) and the projection matrices are the model parameters: WF = {Wf, W }; VG = {Vg, V }.2 Formally, the DCCA objective can be written as: max WF ,VG Tr(ˆΣF G) so that ˆΣF F = ˆΣGG = I. (1) ˆΣF G ≡ 1 N−1F (X)G(Y )T is the estimation of the cross-covariance matrix of the outputs, and ˆΣF F ≡ 1 N−1F (X)F (X)T , ˆΣGG ≡ 1 N−1G(Y )G(Y )T are the estimations of the autocovariance matrices of the outputs.3 Further, following Wang et al. (2015b), the optimal solution of Eq. (1) is equivalent to the optimal solution of the following: min WF ,VG 1 N −1∥F (X) −G(Y )∥2 F s.t. ˆΣF F = ˆΣGG = I. (2) The main disadvantage of DCCA is its inability to support more than two views, and to learn conditioned on an additional shared view, which is why we introduce Deep Partial CCA. 3.2 New Model: Deep Partial CCA Figure 1b illustrates the architecture of DPCCA. The training data now consists of triplets (xi, yi, zi)N 1=1 from three views, forming the columns of X, Y and Z, where xi ∈RDx, yi ∈ RDy, zi ∈RDz for i = 1, . . . , N. The objective is to maximize the canonical correlation of the first two views X and Y conditioned on the shared third variable Z. Following Rao (1969)’s work on Partial CCA, we first consider two multivariate linear multiple regression models: F (X) = AZ + F (X|Z), (3) G(Y ) = BZ + G(Y |Z). (4) 2For notational simplicity, we assume f(X) and g(Y ) to have zero-means, otherwise it is possible to centralize them at the final layer of each network to the same effect. 3The CCA model can be seen as a special (linear) case of the more general DCCA model. The basic CCA objective can be recovered from the DCCA objective by simply setting D′ x = Dx, D′ y = Dy and f(X) = idX, g(Y ) = idY ; id is the identity mapping. 913 (a) (b) Figure 1: DCCA and DPCCA architectures. (a): DCCA. X and Y (English and German image descriptions) are fed through two identical deep feed-forward neural networks followed by a final linear layer. The final nodes of the networks F (X) and G(Y ) are then maximally correlated via the CCA objective. (b): DPCCA. In addition, a third (shared) variable Z (an image) is either optimized via an identical architecture of the two main views (DPCCA Variant B, illustrated here) or kept fixed (DPCCA Variant A). The final nodes of the networks F (X) and G(Y ) are maximally correlated conditioned on the final node in the middle network H(Z) (or directly on the input node Z in DPCCA Variant A). 
A, B ∈RL×Dz are matrices of coefficients, and F (X|Z), G(Y |Z) ∈RL×N are normal random error matrices: residuals. We then minimize the mean-squared error regression criterion: min A 1 N −1∥F (X) −AZ∥2 F , (5) min B 1 N −1∥G(Y ) −BZ∥2 F . (6) After obtaining the optimal solutions for the coefficients, ˆA and ˆB, the residuals are as follows: F (X|Z) = F (X) −ˆAZ = F (X) −ˆΣF Z ˆΣ−1 ZZZ. (7) G(Y |Z) is computed in the analogous manner, now relying on G(Y ) and ˆBZ. ˆΣS′Z ≡ 1 N−1SZT refers to the covariance matrix estimator of S′ and Z, where (S′, S) ∈ {(F , F (X)), (G, G(Y )), (Z, Z)}.4 The canonical correlation between the residual matrices F (X|Z) and G(Y |Z) is referred to as the partial canonical correlation. The Deep PCCA objective can be obtained by replacing F (X) and G(Y ) with their residuals in Eq. (2): min WF ,VG 1 N −1∥F (X|Z) −G(Y |Z)∥2 F s.t. ˆΣF F |Z = ˆΣGG|Z = I. (8) The computation of the conditional covariance matrix ˆΣF F |Z can be formulated as follows: ˆΣF F |Z ≡ 1 N −1F (X|Z)F (X|Z)T = ˆΣF F −ˆΣF Z ˆΣ−1 ZZ ˆΣT F Z. (9) 4A small value ϵ > 0 is added to the main diagonal of the covariance estimators for numerical stability. The other conditional covariance matrix ˆΣGG|Z is again computed in the analogous manner, replacing F with G and X with Y .5 While the (D)PCCA objective is computed over the residuals, after the network is trained (using multilingual texts and corresponding images) we can compute the representations of F (X) and G(Y ) at test time without having access to images (see the network structure in Figure 1b). This heuristic enables the use of DPCCA in a real-life scenario in which images are unavailable at test time, and its encouraging results are demonstrated in §7. Model Variants We consider two DPCCA variants : 1) in DPCCA Variant A, the shared view Z is kept fixed; 2) DPCCA Variant B also optimizes over Z, as illustrated in Figure 1b. Variant A may be seen as a special case of Variant B.6 Variant B learns a non-linear function of the shared variable, H(Z) = U T h(Z), during training, where h : RDz×N →RDz′×N is a DNN having the same architecture as f and g. U ∈RDz′×L is the final linear layer of H, such that overall, the additional parameters of the model are UH = {Uh, U}. Instead of assuming a linear connection between F (X) and G(Y ) to Z, as in Variant A, we now assume that the linear connection takes place with H(Z). This assumption 5The original PCCA objective can be recovered by setting D′ x = Dx, D′ y = Dy and f(X) = idX, g(Y ) = idY . 6For Variant A, in order for Z to be on the same range of values as in F and G, we pass it through the activation function of the network, Z = σ(Z). Due to space constraints we discuss DPCCA Variant A in the supplementary material only. 914 changes Eq. (3) and Eq. (4) to:7 F (X) = A′ · H(Z) + F (X|H(Z)), (10) G(Y ) = B′ · H(Z) + G(Y |H(Z)). (11) 4 DPCCA: Optimization Algorithm Training deep variants of CCA-style multi-view models is non-trivial due to estimation on the entire training set related to whitening constraints (i.e., the orthogonality of covariance matrices). To overcome this issue, Wang et al. (2015b) proposed a stochastic optimization algorithm for DCCA via non-linear orthogonal iterations (DCCA NOI). Relying on the solution for DCCA (§4.1), we develop a new optimization algorithm for DPCCA in §4.2. 4.1 Optimization of DCCA The DCCA optimization from Wang et al. (2015b), fully provided in Algorithm 1, relies on three key steps. 
First, the estimation of the covariance matrices in the form of ˆΣF F t at time t is calculated by a moving average over the minibatches: ˆΣF F t ←ρˆΣF F t−1 + (1 −ρ) |bt| N −1 −1F (Xbt)F (Xbt)T . (12) bt is the minibatch at time t, Xbt is the current input matrix at time t, and ρ ∈[0, 1] controls the ratio between the overall covariance estimation and the covariance estimation of the current minibatch.8 This step eliminates the need of estimating the covariances over all training data, as well as the inherent bias when the estimate relies only on the current minibatch. Second, the DCCA NOI algorithm forces the whitening constraints to hold by performing an explicit matrix transformation in the form of: ^ F (Xbt) = ˆΣ −1 2 F FtF (Xbt). (13) According to Horn et al. (1988), if ρ = 0:  |bt| N −1 −1 ^ F (Xbt) ^ F (Xbt) T = I. (14) Finally, in order to optimize the DCCA objective (see Eq. (2)), the weights of the two DNNs are decoupled: i.e., the objective is disassembled into two separate mean-squared error objectives. Instead of 7Note that the matrices of coefficients A′ , B′ ∈RL×L. 8Setting ρ to a high value indicates slow updates of the estimator; setting it low mostly erases the overall estimation and relies more on the current minibatch estimation. Algorithm 1 The non-linear orthogonal iterations (NOI) algorithm for DCCA (DCCA NOI) Input: Data matrices X ∈RDx×N, Y ∈RDy×N, time constant ρ, learning rate η. initialization: Initialize weights (WF , VG). Randomly choose a minibatch (Xb0, Yb0). Initialize covariances: ˆΣF F ←N−1 |b0| F (Xb0)F (Xb0)T ˆΣGG ←N−1 |b0| G(Yb0)G(Yb0)T for t = 1, 2, . . . , n do Randomly choose a minibatch (Xbt, Ybt). Update covariances: ˆΣF F ←ρˆΣF F + (1 −ρ) N−1 |bt| F (Xbt)F (Xbt)T ˆΣGG ←ρˆΣGG + (1 −ρ) N−1 |bt| G(Ybt)G(Ybt)T Fix ^ G(Ybt) = ˆΣ −1 2 GGG(Ybt), and compute ∇WF with respect to: min WF 1 |bt|∥F (Xbt) −^ G(Ybt)∥2 F Update parameters: WF ←WF −η∇WF Fix ^ F (Xbt) = ˆΣ −1 2 F F F (Xbt), and compute ∇VG with respect to: min VG 1 |bt|∥G(Ybt) − ^ F (Xbt)∥2 F Update parameters: VG ←VG −η∇VG end for Output: (WF , VG) trying to bring F (Xbt) and G(Ybt) closer in one gradient descent step, two steps are performed: one of the views is fixed, and a gradient step over the other is performed, and so on, iteratively. The final objective functions at each time step are: min WF 1 |bt|∥F (Xbt) −^ G(Ybt)∥2 F , (15) min VG 1 |bt|∥G(Ybt) − ^ F (Xbt)∥2 F . (16) Wang et al. (2015b) show that the projection matrices W and V converge to the exact solutions of CCA as t→∞when considering linear CCA. 4.2 Optimization of DPCCA Our DPCCA optimization is based on the DCCA NOI algorithm with several adjustments. Besides the requirement to obtain the sample covariances ˆΣF F and ˆΣGG, when calculating the conditional variables F (X|Z), G(Y |Z), ˆΣF F |Z and ˆΣGG|Z, we additionally have to obtain the stochastic estimators ˆΣF Z, ˆΣGZ and ˆΣZZ. To this end, we use the moving average estimation from Eq. (12). Next, we define the whitening transformation on the residuals: 915 ^ F (Xbt|Zbt) = ˆΣ −1 2 F Ft|ZF (Xbt|Zbt), (17) ^ G(Ybt|Zbt) = Σ −1 2 GGt|ZG(Ybt|Zbt). (18) As before, the whitening constraints hold when ρ = 0. From here, we derive our two final objective functions over the residuals at time t: min WF 1 |bt|∥F (Xbt|Zbt) − ^ G(Ybt|Zbt)∥2 F , (19) min VG 1 |bt|∥G(Ybt|Zbt) − ^ F (Xbt|Zbt)∥2 F . (20) Equivalently to Eq. (15)-(16) that replace Eq. (2), Eq. (19)-(20) replace Eq. (8) by performing stochastic, decoupled and unconstrained steps. 
As our algorithm performs CCA over the residuals, we gain the same guarantees as Wang et al. (2015b), now for the projection matrices of the residuals. Algorithm 2 shows the full optimization procedure for the more complex DPCCA Variant B. The full algorithm for Variant A is provided in the supplementary material. The main difference is that with Variant B we replace Z with H(Z) in all equations where it appears, and we optimize over UH along with WF and VG in Eq. (19) and Eq. (20), respectively. 5 Tasks and Data Cross-lingual Image Description Retrieval The cross-lingual image description retrieval task is formulated as follows: taking an image description as a query in the source language, the system has to retrieve a set of relevant descriptions in the target language which describe the same image. Our evaluation assumes a single-best scenario, where only a single target description is relevant for each query. In addition, in our setup, images are not available during inference: retrieval is performed based solely on text queries. This enables a fair comparison between our model and many baseline models that cannot represent images and text in a shared space. Moreover, it allows us to test our model in the realistic setup where images are not available at test time. To avoid the use of images at retrieval time with DPCCA, we perform the retrieval on F (X) and G(Y ), rather than on F (X|Z) and G(Y |Z) (see §3.2). We use the Multi30K dataset (Elliott et al., 2016), originated from Flickr30K (Young et al., 2014) that is comprised of Flicker images described with 1-5 English descriptions per image. Multi30K adds Algorithm 2 The non-linear orthogonal iterations (NOI) algorithm for DPCCA Variant B Input: Data matrices X ∈ RDx×N, Y ∈ RDy×N, Z ∈RDz×N, time constant ρ, learning rate η. initialization: Initialize weights (WF , VG, UH). Randomly choose a minibatch (Xb0, Yb0, Zb0). Initialize covariances: ˆΣF F ←N−1 |b0| F (Xb0)F (Xb0)T ˆΣGG ←N−1 |b0| G(Yb0)G(Yb0)T ˆΣHH ←N−1 |b0| H(Zb0)H(Zb0)T ˆΣF H ←N−1 |b0| F (Xb0)H(Zb0)T ˆΣGH ←N−1 |b0| G(Yb0)H(Zb0)T for t = 1, 2, . . . , n do Randomly choose a minibatch (Xbt, Ybt, Zbt). Update covariances: ˆΣF F ←ρˆΣF F + (1 −ρ) N−1 |bt| F (Xbt)F (Xbt)T ˆΣGG ←ρˆΣGG + (1 −ρ) N−1 |bt| G(Ybt)G(Ybt)T ˆΣHH ←ρˆΣHH + (1 −ρ) N−1 |bt| H(Zbt)H(Zbt)T ˆΣF H ←ρˆΣF H + (1 −ρ) N−1 |bt| F (Xbt)H(Zbt)T ˆΣGH ←ρˆΣGH + (1 −ρ) N−1 |bt| G(Ybt)H(Zbt)T Update conditional variables: F |H ←F (Xbt) −ˆΣF H ˆΣ−1 HHH(Zbt) G|H ←G(Ybt) −ˆΣGH ˆΣ−1 HHH(Zbt) ˆΣF F |H ←ˆΣF F −ˆΣF H ˆΣ−1 HH ˆΣT F H ˆΣGG|H ←ˆΣGG −ˆΣGH ˆΣ−1 HH ˆΣT GH Fix ] G|H = ˆΣ −1 2 GG|HG|H, and compute ∇WF , ∇UH with respect to: min WF ,UH 1 |bt|∥F |H −] G|H∥2 F Update parameters: WF ←WF −η∇WF , UH ←UH −η∇UH Fix ] F |H = ˆΣ −1 2 F F |HF |H, and compute ∇VG, ∇UH with respect to: min VG,UH 1 |bt|∥G|H −] F |H∥2 F Update parameters: VG ←VG −η∇VG, UH ←UH −η∇UH end for Output: (WF , VG, UH) German descriptions to a total of 30,014 images: most were written independently of the English descriptions, while some are direct translations. Each image is associated with one English and one German description. We rely on the original Multi30K splits with 29,000, 1,014, and 1,000 triplets for training, validation, and test, respectively. Multilingual Word Similarity The word similarity task tests the correlation between automatic and human generated word similarity scores. 
We evaluate with the Multilingual SimLex-999 dataset (Leviant and Reichart, 2015): the 999 English (EN) 916 EN-DE EN-IT EN-RU Nouns 4606 4735 4106 Adjectives 405 416 348 Verbs 392 400 227 Adverbs 167 161 142 Prepositions 12 12 9 Total 5598 5740 4838 Table 1: WIW statistics: the number of WIW entries across POS classes in each language pair. The numbers of words per POS class are not summed to the total number of words as other (less frequent) POS tags are also represented. word pairs from SimLex-999 (Hill et al., 2015) were translated to German (DE), Italian (IT), and Russian (RU), and similarity scores were crowdsourced from native speakers. We introduce a new dataset termed Word-ImageWord (WIW), which we use to train word-level models for the multilingual word similarity task. WIW contains three bilingual lexicons (EN-DE, EN-IT, EN-RU) with images shared between words in a lexicon entry. Each WIW entry is a triplet: an English word, its translation in DE/IT/RU, and a set of images relevant to the pair. English words were taken from the January 2017 Wikipedia dump. After removing stop words and punctuation, we extract the 6,000 most frequent words from the cleaned corpus not present in SimLex. DE/IT/RU words were obtained semiautomatically from the EN words using Google Translate. The images are crawled from the Bing search engine using MMFeat9 (Kiela, 2016) by querying the EN words only. Following the suggestions from the study of Kiela et al. (2016), we save the top 20 images as relevant images.10 Table 1 provides a summary of the WIW dataset. The dataset contains both concrete and abstract words, and words of different POS tags.11 This property has an influence on the image collection: similar to Kiela et al. (2014), we have noticed that images of more concrete concepts are less dispersed (see also examples from Figure 2). 6 Experimental Setup Data Preprocessing and Embeddings For the sentence-level task, all descriptions were lower9https://github.com/douwekiela/mmfeat. 10Offensive words and images are manually cleaned. 11POS tag information is taken from the NLTK toolkit for the English words. Figure 2: WIW examples from each of the three bilingual lexicons. Note that the designated words can be either abstract (true), express an action (dance) or be more concrete (plant). cased and tokenized. Each sentence is represented with one vector: the average of its word embeddings. For English, we rely on 500-dimensional English skip-gram word embeddings (Mikolov et al., 2013) trained on the January 2017 Wikipedia dump with bag-of-words contexts (window size of 5). For German we use the deWaC 1.7B corpus (Baroni et al., 2009) to obtain 500-dimensional German embeddings using the same word embedding model. For word similarity, to be directly comparable to previous work, we rely on 300-dim word vectors in EN, DE, IT, and RU from Mrkˇsi´c et al. (2017). Visual features are extracted from the penultimate layer (FC7) of the VGG-19 network (Simonyan and Zisserman, 2015), and compressed to the dimensionality of the textual inputs by a Principal Component Analysis (PCA) step. For the word similarity task, we average the visual vectors across all images of each word pair as done in, e.g., (Vuli´c et al., 2016), before the PCA step. Baseline Models We consider a wide variety of multi-view CCA-based baselines. First, we compare against the original (linear) CCA model (Hotelling, 1936), and its deep non-linear extension DCCA (Andrew et al., 2013). 
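For reference, the linear CCA baseline admits a closed-form solution via the singular value decomposition of the whitened cross-covariance matrix; a minimal sketch of this textbook construction (the regularisation constant is our own choice):

```python
import numpy as np

def linear_cca(X, Y, L, reg=1e-4):
    """Closed-form linear CCA (Hotelling, 1936): top-L canonical directions.

    X: (Dx, N) and Y: (Dy, N) data matrices; returns projection matrices
    A (Dx, L) and B (Dy, L) such that A^T X and B^T Y are maximally correlated.
    """
    N = X.shape[1]
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    Sxx = X @ X.T / (N - 1) + reg * np.eye(X.shape[0])
    Syy = Y @ Y.T / (N - 1) + reg * np.eye(Y.shape[0])
    Sxy = X @ Y.T / (N - 1)

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return (V * (1.0 / np.sqrt(w))) @ V.T

    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)   # whitened cross-covariance
    U, _, Vt = np.linalg.svd(T)               # singular values = canonical correlations
    A = inv_sqrt(Sxx) @ U[:, :L]
    B = inv_sqrt(Syy) @ Vt.T[:, :L]
    return A, B
```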
For DCCA: 1) we rely on its improved optimization algorithm from Wang et al. (2015a) which uses a stochastic approach with large minibatches; 2) we compare against the DCCA NOI variant (Wang et al., 2015b) described by Algorithm 1, and another recent DCCA variant with the optimization algorithm based on a stochastic decorrelational loss (Chang et al., 2017) (DCCA SDL); and 3) we also test the DCCA Autoencoder model (DCCAE) (Wang et al., 2015a), which offers a trade-off between maximizing the canonical correlation of two sets of variables and finding informative features for their reconstruction. Another baseline is Generalized CCA (GCCA) (Funaki and Nakayama, 2015; Horst, 1961; Rastogi et al., 2015): a linear model which extends CCA to 917 three or more views. Unlike PCCA, GCCA does not condition two variables on the third shared one, but rather seeks to maximize the canonical correlations of all pairs of views. We also compare to Nonparametric CCA (NCCA) (Michaeli et al., 2016), and to a probabilistic variant of PCCA (PPCCA, Mukuta and Harada (2014)). Finally, we compare with the two recent models which operate in the setup most similar to ours: 1) Bridge Correlational Networks (BCN) (Rajendran et al., 2016); and 2) Image Pivoting (IMG PIVOT) from Gella et al. (2017). For both models, we report results only with the strongest variant based on the findings from the original papers, also verified by additional experimentation in our work.12 Hyperparameter Tuning The hyperparameters of the different models are tuned with a grid search over the following values: {2,3,4,5} for number of layers, {tanh, sigmoid, ReLU} as the activation functions (we use the same activation function in all the layers of the same network), {64,128,256} for minibatch size, {0.001,0.0001} for learning rate, and {128,256} for L (the size of the output vectors). The dimensions of all mid-layers are set to the input size. We use the Adam optimizer (Kingma and Ba, 2015), with the number of epochs set to 300. For all participating models, we report test performance of the best hyperparameter on the validation set. For word similarity, following a standard practice (Levy et al., 2015; Vuli´c et al., 2017) we tune all models on one half of the SimLex data and evaluate on the other half, and vice versa. The reported score is the average of the two halves. Similarity scores for all tasks were computed using the cosine similarity measure. 7 Results and Discussion Cross-lingual Image Description Retrieval We report two standard evaluation metrics: 1) Recall at 1 (R@1) scores, and 2) the sentence-level BLEU+1 metric (Lin and Och, 2004), a variant of BLEU which smooths terms for higher-order n-grams, making it more suitable for evaluating short sentences. The scores for the retrieval task with all models are summarized in Table 2. 12 More details about preprocessing and baselines (including all links to their code), are in the the supplementary material. We use original readily available implementations of all baselines whenever this is possible, and our in-house implementations for baselines for which no code is provided by the original authors. 
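The retrieval protocol itself is straightforward: each projected source-language description is compared by cosine similarity to all projected target-language descriptions, and R@1 counts how often the single relevant description is ranked first. A minimal sketch (names are ours):

```python
import numpy as np

def recall_at_1(F_src, G_tgt):
    """Recall@1 for single-best cross-lingual description retrieval.

    F_src: (L, N) projected source-language description vectors.
    G_tgt: (L, N) projected target-language description vectors, aligned so that
           column i of G_tgt is the single relevant item for column i of F_src.
    """
    A = F_src / (np.linalg.norm(F_src, axis=0, keepdims=True) + 1e-12)
    B = G_tgt / (np.linalg.norm(G_tgt, axis=0, keepdims=True) + 1e-12)
    sims = A.T @ B                      # (N, N) cosine similarity matrix
    best = sims.argmax(axis=1)          # top-ranked target for each query
    return float((best == np.arange(len(best))).mean())
```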
R@1 BLEU+1 Model EN→DE DE→EN EN→DE DE→EN DPCCA (Variant A) 0.795 0.779 0.836 0.827 DPCCA (Variant B) 0.809 0.794 0.848 0.839 DPCCA(B)+DCCA NOI (concat) 0.826 0.791 0.863 0.837 DCCA NOI (Wang et al., 2015b) 0.812 0.788 0.849 0.830 DCCA SDL (Chang et al., 2017) 0.507 0.487 0.552 0.533 DCCA (Wang et al., 2015a) 0.619 0.621 0.664 0.673 DCCAE (Wang et al., 2015a) 0.564 0.542 0.607 0.598 IMG PIVOT (Gella et al., 2017) 0.772 0.763 0.789 0.781 BCN (Rajendran et al., 2016) 0.579 0.570 0.628 0.629 PCCA (Rao, 1969) 0.785 0.737 0.825 0.787 CCA (Hotelling, 1936) 0.764 0.704 0.803 0.754 GCCA (Funaki and Nakayama, 2015) 0.699 0.690 0.742 0.743 NCCA (Michaeli et al., 2016) 0.157 0.165 0.205 0.213 PPCCA (Mukuta and Harada, 2014) 0.035 0.050 0.063 0.086 Table 2: Results on cross-lingual image description retrieval. NN-based models are above the dashed line. Best overall results are in bold. Best results with non-deep models are underlined. The results clearly demonstrate the superiority of DPCCA (with a slight advantage to the more complex Variant B) and of the concatenation of their representation with that of the DCCA NOI (strongest) baseline. Furthermore, the non-deep, linear PCCA achieves strong results: it outscores all non-deep models, as well as all deep models except from DCCA NOI, IMG PIVOT in one case, and its deep version: DPCCA. This emphasizes our contribution in proposing PCCA for multilingual processing with images as a cross-lingual bridge. The results suggest that: 1) the inclusion of visual information in the training process helps the retrieval task even without such information during inference. DPCCA outscores all DCCA variants (either alone or through a concatenation with the DCCA NOI representation), and PCCA outscores the original two-view CCA model; and 2) deep, non-linear architectures are useful: our DPCCA outperforms the linear PCCA model. We also note clear improvements over the two recent models which also rely on visual information: IMG PIVOT and BCN. The gain over IMG PIVOT is observed despite the fact that IMG PIVOT is a more complex multi-modal model which relies on RNNs, and is tailored to sentence-level tasks. Finally, the scores from Table 2 suggest that improved performance can be achieved by an ensemble model, that is, a simple concatenation of DPCCA (B) and DCCA NOI. Multilingual Word Similarity The results, presented as standard Spearman’s rank correlation scores, are summarized in Table 3: we present fine-grained results over different POS classes for EN and DE, and compare them to the results from 918 English-German Model EN-Adj EN-Verbs EN-Nouns DE-Adj DE-Verbs DE-Nouns DPCCA (Variant A) 0.640 0.311 0.369 0.430 0.321 0.404 DPCCA (Variant B) 0.626 0.316 0.382 0.462 0.319 0.399 DCCA NOI (Wang et al., 2015b) 0.611 0.308 0.361 0.441 0.297 0.398 DCCA (Wang et al., 2015a) 0.618 0.261 0.327 0.404 0.290 0.362 PCCA (Rao, 1969) 0.614 0.296 0.340 0.305 0.143 0.340 CCA (Hotelling, 1936) 0.557 0.297 0.321 0.284 0.157 0.346 GCCA (Funaki and Nakayama, 2015) 0.636 0.280 0.378 0.446 0.277 0.398 INIT EMB 0.582 0.160 0.306 0.407 0.164 0.285 Table 3: Results on EN and DE SimLex-999 (POS-based evaluation). All scores are Spearman’s rank correlations. INIT EMB refers to initial pre-trained monolingual word embeddings (see §6). 
EN-DE WIW EN-IT WIW EN-RU WIW Model EN DE EN IT EN RU DPCCA (A) 0.398 0.400 0.412 0.429 0.404 0.407 DPCCA (B) 0.405 0.400 0.413 0.427 0.413 0.402 PCCA 0.374 0.301 0.370 0.386 0.374 0.374 DCCA NOI 0.390 0.398 0.413 0.422 0.407 0.398 GCCA 0.395 0.386 0.414 0.407 0.412 0.396 INIT EMB 0.321 0.278 0.321 0.361 0.321 0.385 Table 4: Results (Spearman rank correlation) of our models and the strongest baselines on Multilingual SimLex-999 (all data). a selection of strongest baselines. Further, Table 4 presents results on all SimLex word pairs. The POS class result patterns for EN-IT and EN-RU are very similar to the patterns in Table 3 and are provided in the supplementary material. First, the results over the initial monolingual embeddings before training (INIT EMB) clearly indicate that multilingual information is beneficial for the word similarity task. We observe improvements with all models (the only exception being extremely lowscoring PPCCA and NCCA, not shown). Moreover, by additionally grounding concepts from two languages in the visual modality it is possible to further boost word similarity scores. This result is in line with prior work in monolingual settings (Chrupała et al., 2015; Kiela and Bottou, 2014; Lazaridou et al., 2015), which have shown to profit from multi-modal features. The results on the POS classes represented in SimLex-999 (nouns, verbs, adjectives, Table 3) form our main finding: conditioning the multilingual representations on a shared image leads to improvements in verb and adjective representations. While for nouns one of the DPCCA variants is the best performing model for both languages, the gaps from the best performing baselines are much smaller. This is interesting since, e.g., verbs are more abstract than nouns (Hartmann and Søgaard, 2017; Hill et al., 2014). Considering the fact that SimLex-999 consists of 666 noun pairs, 222 verb pairs and 111 adjective pairs, this is the reason that the gains of DPCCA over the strongest baselines across the entire evaluation set are more modest (Table 4). We note again that the same patterns presented in Table 3 for EN-DE – more prominent verb and adjective gains and a smaller gain on nouns – also hold for EN-IT and EN-RU (see the supplementary material). 8 Conclusion and Future Work We addressed the problem of utilizing images as a bridge between languages to learn improved bilingual text representations. Our main contribution is two-fold. First, we proposed to use the Partial CCA (PCCA) method. In addition, we proposed a stochastic optimization algorithm for the deep version of PCCA that overcomes the challenges posed by the covariance estimation required by the method. Our experiments reveal the effectiveness of these methods for both sentence-level and wordlevel tasks. Crucially, our proposed solution does not require access to images at inference/test time, in line with the realistic scenario where images that describe sentential queries are not readily available. In future work we plan to improve our methods by exploiting the internal structure of images and sentences as well as by effectively integrating signals from more than two languages. Acknowledgments IV is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). GR and RR are supported by the Infomedia Magnet Grant and by an AOL grant on ”connected experience technologies”. 919 References Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. 2013. Deep canonical correlation analysis. 
In Proceedings of ICML, pages 1247–1255. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, Lawrence C. Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In Proceedings of ICCV, pages 2425– 2433. Raman Arora and Karen Livescu. 2013. Multi-view CCA-based acoustic features for phonetic recognition across speakers and domains. In Proceedings of ICASSP, pages 7135–7139. Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky Wide Web: A collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43(3):209–226. Lawrence W. Barsalou and Katja Wiemer-Hastings. 2005. Situating abstract concepts. In D. Pecher and R. Zwaan, editors, Grounding cognition: The role of perception and action in memory, language, and thought, pages 129–163. Shane Bergsma and Benjamin Van Durme. 2011. Learning bilingual lexicons using the visual similarity of labeled web images. In Proceedings of IJCAI, pages 1764–1769. Elia Bruni, Nam Khanh Tram, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49:1–47. Richard H Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. 1995. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5):1190–1208. Iacer Calixto and Qun Liu. 2017. Incorporating global visual features into attention-based neural machine translation. In Proceedings of EMNLP, pages 992– 1003. Iacer Calixto, Qun Liu, and Nick Campbell. 2017. Multilingual multi-modal embeddings for natural language processing. arXiv preprint arXiv:1702.01101. Sarath Chandar, Mitesh M Khapra, Hugo Larochelle, and Balaraman Ravindran. 2016. Correlational neural networks. Neural Computation, 28:257–285. Xiaobin Chang, Tao Xiang, and Timothy M. Hospedales. 2017. Deep multi-view learning with stochastic decorrelation loss. CoRR, abs/1707.09669. Grzegorz Chrupała, ´Akos K´ad´ar, and Afra Alishahi. 2015. Learning language through pictures. In Proceedings of ACL, pages 112–118. Desmond Elliott, Stella Frank, and Eva Hasler. 2015. Multilingual image description with neural sequence models. arXiv preprint arXiv:1510.04709. Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. 2016. Multi30K: Multilingual EnglishGerman image descriptions. In Proceedings of the 5th Workshop on Vision and Language, pages 70– 74. Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. 2010. Every picture tells a story: Generating sentences from images. In Proceedings of ECCV, pages 15–29. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of EACL, pages 462– 471. Yansong Feng and Mirella Lapata. 2010. Visual information in semantic representation. In Proceedings of NAACL-HLT, pages 91–99. Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for visual question answering and visual grounding. In Proceedings of EMNLP, pages 457–468. Ruka Funaki and Hideki Nakayama. 2015. Imagemediated learning for zero-shot cross-lingual document retrieval. In Proceedings of EMNLP, pages 585–590. Spandana Gella, Mirella Lapata, and Frank Keller. 2016. Unsupervised visual sense disambiguation for verbs using multimodal embeddings. In Proceedings of NAACL-HLT, pages 182–192. Spandana Gella, Rico Sennrich, Frank Keller, and Mirella Lapata. 
2017. Image pivoting for learning multilingual multimodal representations. In Proceedings of EMNLP, pages 2839–2845. Goran Glavaˇs, Ivan Vuli´c, and Simone Paolo Ponzetto. 2017. If sentences could see: Investigating visual information for semantic textual similarity. In Proceedings of IWCS. Stevan Harnad. 1990. The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1–3). Mareike Hartmann and Anders Søgaard. 2017. Limitations of cross-lingual learning from image search. CoRR, abs/1709.05914. Felix Hill, Roi Reichart, and Anna Korhonen. 2014. Multi-modal models for concrete and abstract concept meaning. Transactions of the ACL, 2:285–296. Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665–695. 920 Julian Hitschler, Shigehiko Schamoni, and Stefan Riezler. 2016. Multimodal pivots for image caption translation. In Proceedings of ACL, pages 2399– 2409. Berthold K.P. Horn, Hugh M. Hilden, and Shahriar Negahdaripour. 1988. Closed-form solution of absolute orientation using orthonormal matrices. Journal of Optical Society of America, 5(7):1127–1135. Paul Horst. 1961. Generalized canonical correlations and their applications to experimental data. Journal of Clinical Psychology, 17(4):331–347. Harold Hotelling. 1936. Relations between two sets of variates. Biometrika, 28(3/4):321–377. Po-Yao Huang, Frederick Liu, Sz-Rung Shiang, Jean Oh, and Chris Dyer. 2016. Attention-based multimodal neural machine translation. In Proceedings of WMT, pages 639–645. Agnan Kessy, Alex Lewin, and Korbinian Strimmer. 2017. Optimal whitening and decorrelation. The American Statistician. Douwe Kiela. 2016. MMFeat: A toolkit for extracting multi-modal features. In Proceedings of ACL System Demonstrations, pages 55–60. Douwe Kiela and L´eon Bottou. 2014. Learning image embeddings using convolutional neural networks for improved multi-modal semantics. In Proceedings of EMNLP, pages 36–45. Douwe Kiela, Felix Hill, Anna Korhonen, and Stephen Clark. 2014. Improving multi-modal representations using image dispersion: Why less is sometimes more. In Proceedings of ACL, pages 835–841. Douwe Kiela, Anita Lilla Ver˝o, and Stephen Clark. 2016. Comparing data sources and architectures for deep visual representation learning in semantics. In Proceedings of EMNLP, pages 447–456. Douwe Kiela, Ivan Vuli´c, and Stephen Clark. 2015. Visual bilingual lexicon induction with transferred ConvNet features. In Proceedings of EMNLP, pages 148–158. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR (Conference Track). Ryan Kiros, Ruslan Salakhutdinov, and Rich Zemel. 2014. Multimodal neural language models. In Proceedings of ICML, pages 595–603. George Lakoff and Mark Johnson. 1999. Philosophy in the flesh: The embodied mind and its challenge to Western thought. Angeliki Lazaridou, Nghia The Pham, and Marco Baroni. 2015. Combining language and vision with a multimodal skip-gram model. In Proceedings of NAACL-HLT, pages 153–163. Ira Leviant and Roi Reichart. 2015. Judgment language matters: Multilingual vector space models for judgment language aware lexical semantics. CoRR, abs/1508.00106. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the ACL, 3:211–225. Chin-Yew Lin and Franz Josef Och. 2004. ORANGE: A method for evaluating automatic evaluation metrics for machine translation. 
In Proceedings of COLING, pages 501–507. Max M. Louwerse. 2011. Symbol interdependency in symbolic and embodied cognition. Topics in Cognitive Science, 59(1):617–645. Ang Lu, Weiran Wang, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Deep multilingual correlation for improved word embeddings. In Proceedings of NAACL-HLT, pages 250–256. Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan Yuille. 2015. Deep captioning with multimodal recurrent neural networks (mRNN). In Proceedings of ICLR (Conference Track). Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan L Yuille. 2014. Explain images with multimodal recurrent neural networks. arXiv preprint arXiv:1410.1090. Tomer Michaeli, Weiran Wang, and Karen Livescu. 2016. Nonparametric canonical correlation analysis. In Proceedings of ICML, pages 1967–1976. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of ICLR (Conference Track). Nikola Mrkˇsi´c, Ivan Vuli´c, Diarmuid ´O S´eaghdha, Ira Leviant, Roi Reichart, Milica Gaˇsi´c, Anna Korhonen, and Steve Young. 2017. Semantic specialization of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the ACL, 5(1):309–324. Yusuke Mukuta and Harada. 2014. Probabilistic partial canonical correlation analysis. In Proceedings of ICML, pages 1449–1457. Hideki Nakayama and Noriki Nishida. 2017. Zeroresource machine translation by multimodal encoder–decoder network with multimedia pivot. Machine Translation, 31(1-2):49–64. Janarthanan Rajendran, Mitesh M. Khapra, Sarath Chandar, and Balaraman Ravindran. 2016. Bridge correlational neural networks for multilingual multimodal representation learning. In Proceedings of NAACL-HLT, pages 171–181. 921 B. Raja Rao. 1969. Partial canonical correlations. Trabajos de estadistica y de investigaci´on operativa, 20(2-3):211–219. Pushpendre Rastogi, Benjamin Van Durme, and Raman Arora. 2015. Multiview LSA: Representation learning via generalized CCA. In Proceedings of NAACLHLT, pages 556–566. Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In Proceedings of ICLR (Workshop Track). Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of CVPR, pages 3156–3164. Ivan Vuli´c, Douwe Kiela, Stephen Clark, and MarieFrancine Moens. 2016. Multi-modal representations for improved bilingual lexicon learning. In Proceedings of ACL, pages 188–194. ACL. Ivan Vuli´c, Roy Schwartz, Ari Rappoport, Roi Reichart, and Anna Korhonen. 2017. Automatic selection of context configurations for improved class-specific word representations. In Proceedings of CoNLL, pages 112–122. Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. 2015a. On deep multi-view representation learning. In Proceedings of ICML, pages 1083– 1092. Weiran Wang, Raman Arora, Karen Livescu, and Nathan Srebro. 2015b. Stochastic optimization for deep CCA via nonlinear orthogonal iterations. In Proceedings of Communication, Control, and Computing, pages 688–695. Huijuan Xu and Kate Saenko. 2016. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In Proceedings of ECCV, pages 451–466. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. 
In Proceedings of ICML, pages 2048–2057. Fei Yan and Krystian Mikolajczyk. 2015. Deep correlation for matching images and text. In Proceedings of CVPR, pages 3441–3450. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the ACL, 2:67–78.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 922–933 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 922 Illustrative Language Understanding: Large-Scale Visual Grounding with Image Search Jamie Ryan Kiros* William Chan* Google Brain Toronto {kiros, williamchan, geoffhinton}@google.com Geoffrey E. Hinton Abstract We introduce Picturebook, a large-scale lookup operation to ground language via ‘snapshots’ of our physical world accessed through image search. For each word in a vocabulary, we extract the top-k images from Google image search and feed the images through a convolutional network to extract a word embedding. We introduce a multimodal gating function to fuse our Picturebook embeddings with other word representations. We also introduce Inverse Picturebook, a mechanism to map a Picturebook embedding back into words. We experiment and report results across a wide range of tasks: word similarity, natural language inference, semantic relatedness, sentiment/topic classification, image-sentence ranking and machine translation. We also show that gate activations corresponding to Picturebook embeddings are highly correlated to human judgments of concreteness ratings. 1 Introduction Constructing grounded representations of natural language is a promising step towards achieving human-like language learning. In recent years, a large amount of research has focused on integrating vision and language to obtain visually grounded word and sentence representations. One source of grounding, which has been utilized in existing work, is image search engines. Search engines allow us to obtain correspondences between language and images that are far less restricted than existing multimodal datasets which typically have restricted vocabularies. While true natural language understanding may require fully *Both authors contributed equally to this work. embodied cognition, search engines allow us to get a form of quasi-grounding from high-coverage ‘snapshots’ of our physical world provided by the interaction of millions of users. One place to incorporate grounding is in the lookup table that maps tokens to vectors. The dominant approach to learning distributed word representations is through indexing a learned matrix. While immensely successful, this lookup operation is typically learned through co-occurrence objectives or a task-dependent reward signal. A very different way to obtain word embeddings is to aggregate features obtained by using the word as a query for an image search engine. This involves retrieving the top-k images from a search engine, running those through a convolutional network and aggregating the results. These word embeddings are grounded via the retrieved images. While several authors have considered this approach, it has been largely limited to a few thousand queries and only a small number of tasks. In this paper we introduce Picturebook embeddings produced by image search using words as queries. Picturebook embeddings are obtained through a convolutional network trained with a semantic ranking objective on a proprietary image dataset with over 100+ million images (Wang et al., 2014). Using Google image search, a Picturebook embedding for a word is obtained by concatenating the k-feature vectors of our convolutional network on the top-k retrieved search results. 
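A minimal sketch of this construction, formalised in §3.1, is shown below; the search and feature-extraction callables are placeholders for the proprietary components described in the paper and are not actual APIs.

```python
import numpy as np

def picturebook_embedding(word, image_search, cnn_features, k=10):
    """Build a Picturebook embedding for `word` by concatenating CNN features
    of its top-k image search results, in search-rank order.

    image_search: callable returning the top-k images for a query (placeholder).
    cnn_features: callable mapping an image to its 64-d embedding from the
                  ranking-trained convolutional network (placeholder).
    Returns a 64*k dimensional vector.
    """
    images = image_search(word, k)                 # ranked list of k images
    feats = [cnn_features(img) for img in images]  # each a 64-d feature vector
    return np.concatenate(feats)                   # shape (64 * k,)
```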
The main contributions of our work are as follows: • We obtain Picturebook embeddings for the 2.2 million words that occur in the Glove vocabulary (Pennington et al., 2014) 1, allowing each word to have a Glove embedding and a parallel grounded word representation. This collection of word representations that we visually 1Common Crawl, 840B tokens 923 ground via image search is 2-3 orders of magnitude larger than prior work. • We introduce a multimodal gating mechanism to selectively choose between Glove and Picturebook embeddings in a task-dependent way. We apply our approach to over a dozen datasets and several different tasks: word similarity, sentence relatedness, natural language inference, topic/sentiment classification, image sentence ranking and Machine Translation (MT). • We introduce Inverse Picturebook to perform the inverse lookup operation. Given a Picturebook embedding, we find the closest words which would generate the embedding. This is useful for generative modelling tasks. • We perform an extensive analysis of our gating mechanism, showing that the gate activations for Picturebook embeddings are highly correlated with human judgments of concreteness. We also show that Picturebook gate activations are negatively correlated with image dispersion (Kiela et al., 2014), indicating that our model selectively chooses between word embeddings based on their abstraction level. • We highlight the importance of the convolutional network used to extract embeddings. In particular, networks trained with semantic labels result in better embeddings than those trained with visual labels, even when evaluating similarity on concrete words. 2 Related Work The use of image search for obtaining word representations is not new. Table 1 illustrates existing methods that utilize image search and the tasks considered in their work. There has also been other work using other image sources such as ImageNet (Kiela and Bottou, 2014; Collell and Moens, 2016) over the WordNet synset vocabulary, and using Flickr photos and captions (Joulin et al., 2016). Our approach differs from the above methods in three main ways: a) we obtain searchgrounded representations for over 2 million words as opposed to a few thousand, b) we apply our representations to a higher diversity of tasks than previously considered, and c) we introduce a multimodal gating mechanism that allows for a more flexible integration of features than mere concatenation. Our work also relates to existing multimodal models combining different representations of the data (Hill and Korhonen, 2014). Various work has Method tasks (Bergsma and Durme, 2011) bilingual lexicons (Bergsma and Goebel, 2011) lexical preference (Kiela et al., 2014) word similarity (Kiela et al., 2015a) lexical entailment detection (Kiela et al., 2015b) bilingual lexicons (Shutova et al., 2016) metaphor identification (Bulat et al., 2015) predicting property norms (Kiela, 2016) toolbox (Vulic et al., 2016) bilingual lexicons (Kiela et al., 2016) word similarity (Anderson et al., 2017) decoding brain activity (Glavas et al., 2017) semantic text similarity (Bhaskar et al., 2017) abstract vs concrete nouns (Hartmann and Sogaard, 2017) bilingual lexicons (Bulat et al., 2017) decoding brain activity Table 1: Existing methods that use image search for grounding and their corresponding tasks. 
also fused text-based representations with imagebased representations (Bruni et al., 2014; Lazaridou et al., 2015; Chrupala et al., 2015; Mao et al., 2016; Silberer et al., 2017; Kiela et al., 2017; Collell et al., 2017; Zablocki et al., 2018) and representations derived from a knowledge-graph (Thoma et al., 2017). More recently, gating-based approaches have been developed for fusing traditional word embeddings with visual representations. Arevalo et al. (2017) introduce a gating mechanism inspired by the LSTM while Kiela et al. (2018) describe an asymmetric gate that allows one modality to ‘attend’ to the other. The work that most closely matches ours is that of Wang et al. (2018) who also consider fusing Glove embeddings with visual features. However, their analysis is restricted to word similarity tasks and they require text-to-image regression to obtain visual embeddings for unseen words, due to the use of ImageNet. The use of image search allows us to obtain visual embeddings for a virtually unlimited vocabulary without needing a mapping function. 3 Picturebook Embeddings Our Picturebook embeddings ground language using the ‘snapshots’ returned by an image search engine. Given a word (or phrase), we image search for the top-k images and extract the images. We then pass each image through a CNN trained with a semantic ranking objective to extract its embedding. Our Picturebook embeddings reflect the search rankings by concatenating the individual embeddings in the order of the search results. We can perform all of these operations offline to construct a matrix Ep representing the Picturebook 924 embeddings over a vocabulary. 3.1 Inducing Picturebook Embeddings The convolutional network used to obtain Picturebook embeddings is based off of Wang et al. (2014). Let pi, p+ i , p− i denote a triplet of query, positive and negative images, respectively. We define the following hinge loss for a given triplet as follows: l(pi, p+ i , p− i ) = max{0, g + D(f(pi), f(p+ i )) −D(f(pi), f(p− i ))} (1) where f(pi) represents the embedding of image pi, D(·, ·) is the Euclidean distance and g is a margin (gap) hyperparameter. Suppose we have available pairwise relevance scores ri,j = r(pi, pj) indicating the similarity of images pi and pj. The objective function that is optimized is given by: min X i ⇠i + λkWk2 2 s.t. :l(pi, p+ i , p− i ) ⇠i 8pi, p+ i , p− i such that r(pi, p+ i ) > r(pi, p− i ) (2) where ⇠i are slack variables and W is a vector of the network’s model parameters. The model is trained end-to-end using a proprietary dataset with 100+ million images. We refer the reader to Wang et al. (2014) for additional details of training, including the specifics of the architecture used. After the model is trained, we can use the convolutional network as a feature extractor for images by computing an embedding vector f(p) for an image p. Suppose we would like to obtain a Picturebook embedding for a given word w. We first perform an image search with query w to obtain a ranked list of images pw 1 , . . . , pw k . The Picturebook embedding for a word w is then represented as: ep(w) = [f(pw 1 ); f(pw 2 ); . . . ; f(pw k )] (3) namely, the concatenation of the feature vectors in ranked order. In our model, each embedding results in a 64-dimensional vector with the final Picturebook embedding being 64 ⇤k dimensions. Most of our experiments use k = 10 images resulting in a word embedding size of 640. 
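A minimal sketch of the per-triplet hinge loss of Eq. (1); the margin value shown is illustrative, and the full training objective additionally involves the slack-variable formulation of Eq. (2).

```python
import numpy as np

def triplet_hinge_loss(f_query, f_pos, f_neg, gap=0.1):
    """Ranking hinge loss of Eq. (1) on one (query, positive, negative) triplet.

    Each argument is the CNN embedding f(.) of the corresponding image; the loss
    is zero once the positive is closer to the query than the negative by `gap`.
    """
    d_pos = np.linalg.norm(f_query - f_pos)   # Euclidean distance to the positive
    d_neg = np.linalg.norm(f_query - f_neg)   # Euclidean distance to the negative
    return max(0.0, gap + d_pos - d_neg)
```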
To obtain the full collection of embeddings, we run the full Glove vocabulary (2.2M words) through image search to obtain a corresponding Picturebook embedding to each word in the Glove vocabulary. 3.2 Visual vs Semantic Similarity The training procedure is heavily influenced by the choice of similarity function ri,j. We consider two types of image similarity: visual and semantic. As an example, an image of a blue car would have high visual similarity to other blue cars but would have higher semantic similarity to cars of the same make, independent of color. In our experiments we consider two types of Picturebook embedding: one trained through optimizing for visual similarity and another for semantic similarity. As we will show in our experiments, the semantic Picturebook embeddings result in representations that are more useful for natural language processing tasks than the visual embeddings. 3.3 Multimodal Fusion Gating Picturebook embeddings on their own are likely to be useful for representing concrete words but it is not clear whether they will be of benefit for abstract words. Consequently, we would like to fuse our Picturebook embeddings with other sources of information, for example Glove embeddings (Pennington et al., 2014) or randomly initialized embeddings that will be trained. Let eg = eg(w) be our other embedding (i.e., Glove) for a word w and ep = ep(w) be our Picturebook embedding. We fuse our embeddings using a multimodal gating mechanism: g = σ(eg, ep) (4) e = g ⊙φ(eg) + (1 −g) ⊙ (ep) (5) where σ is a 1 hidden layer DNN with ReLU activations and sigmoid outputs, φ and are 1 hidden layer DNNs with ReLU activations and tanh outputs. The gating DNN σ allows the model to learn how visual a word is as a function of its input ep and eg. Similar gating mechanisms can be found in LSTMs (Hochreiter and Schmidhuber, 1997) and other multimodal models (Arevalo et al., 2017; Wang et al., 2018; Kiela et al., 2018). On some experiments we found it beneficial to include a skip connection from the hidden layer of σ. We chose this form of fusion over other approaches, such as CCA variants and metric learning methods, to allow for easier interpretability and analysis. We leave comparison of alternative fusion strategies for future work. 3.4 Contextual Gating The gating described above is non-contextual, in the sense that each embedding computes a gate 925 value independent of the context the words occur in. In some cases it may be beneficial to use contextual gates that are aware of the sentence that words appear in to decide how to weight Glove and Picturebook embeddings. For contextual gates, we use the same approach as above except we replace the controller σ(eg, ep) with inputs that have been fed through a bidirectionalLSTM, e.g. σ(BiLSTM(eg),BiLSTM(ep)). We experiment with contextual gating for all experiments that use a bidirectional-LSTM encoder. 3.5 Inverse Picturebook Picturebook embeddings can be seen as a form of implicit image search: given a word (or phrase), image search the word query and concatenate the embeddings of the images produced by a CNN. Up until now, we have only discussed scenarios where we have a word and we want to perform this implicit search operation. In generative modelling problems (i.e., MT), we want to perform the opposite operation. Given a Picturebook embedding, we want to find the closest word or phrase aligned to the representation. 
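Before continuing with this inverse operation, the non-contextual fusion gate of Eqs. (4)–(5) can be sketched as follows; the parameter layout, names, and hidden-layer sizes are our own illustrative choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def one_hidden(x, W1, b1, W2, b2, out):
    """One-hidden-layer network with ReLU hidden units and `out` output nonlinearity."""
    return out(W2 @ np.maximum(0.0, W1 @ x + b1) + b2)

def fused_embedding(e_g, e_p, p):
    """Multimodal fusion gate of Eqs. (4)-(5) for a single word.

    e_g: Glove embedding; e_p: Picturebook embedding; p: dict of weights for the
    gate controller sigma and the transformations phi (Glove) and psi (Picturebook).
    The three output vectors are assumed to share the same dimensionality.
    """
    g = one_hidden(np.concatenate([e_g, e_p]),
                   p["Ws1"], p["bs1"], p["Ws2"], p["bs2"], sigmoid)   # Eq. (4)
    phi = one_hidden(e_g, p["Wg1"], p["bg1"], p["Wg2"], p["bg2"], np.tanh)
    psi = one_hidden(e_p, p["Wp1"], p["bp1"], p["Wp2"], p["bp2"], np.tanh)
    return g * phi + (1.0 - g) * psi                                  # Eq. (5)
```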
For example, given the word ‘bicycle’ in English and its Picturebook embedding, we want to find the closest French word that would generate this representation (i.e., ‘v´elo’). We want to perform this inverse image search operation given its Picturebook embedding. We introduce a differentiable mechanism which allows us to align words across source and target languages in the Picturebook embedding domain. Let h be our internal representation of our model (i.e., seq2seq decoder state), and ei be the i-th word embedding from our Picturebook embedding matrix Ep: p(yi|h) = exp(hh, eii) P j exp(hh, eji) (6) Given a representation h, Equation 6 simply finds the most similar word in the embedding space. This can be easily implemented by setting the output softmax matrix as the transpose of the Picturebook embedding matrix Ep. In practice, we find adding additional parameters helps with learning: p(yi|h) = exp(hh, ei + e0 ii + bi) P j exp(hh, ej + e0 ji + bj) (7) where e0 i is a trainable weight vector per word and bi is a trainable bias per word. A similar technique to tie the softmax matrix as the transpose of the embedding matrix can be found in language modelling (Press and Wolf, 2017; Inan et al., 2017). 4 Experiments To evaluate the effectiveness of our embeddings, we perform both quantitative and qualitative evaluation across a wide range of natural language processing tasks. Hyperparameter details of each experiment are included in the appendix. Since the use of Picturebook embeddings adds extra parameters to our models, we include a baseline for each experiment (either based on Glove or learned embeddings) that we extensively tune. In most experiments, we end up with baselines that are stronger than what has previously been reported. 4.1 Nearest neighbours In order to get a sense of the representations our model learns, we first compute nearest neighbour results of several words, shown in Table 2. These results can be interpreted as follows: the words that appear as neighbours are those which have semantically similar images to that of the query. Often this captures visual similarity as well. Some words capture multimodality, such as ‘deep’ referring both to deep sea as well as to AI. Searching for cities returns cities which have visually similar characteristics. Words like ‘sun’ also return the corresponding word in different languages, such as ‘Sol’ in Spanish and ‘Soleil’ in French. Finally, it’s worth highlighting that the most frequent association of a word may not be what is represented in image search results. For example, the word ‘is’ returns words related to terrorists and ISIS and ‘it’ returns words related to scary and clowns due to the 2017 film of the same name. We also report nearest neighbour examples across languages in Appendix A.1. 4.2 Word similarity Our first quantitative experiment aims to determine how well Picturebook embeddings capture word similarity. We use the SimLex-999 dataset (Hill et al., 2015) and report results across 9 categories: all (the whole evaluation), adjectives, nouns, verbs, concreteness quartiles and the hardest 333 pairs. For the concreteness quartiles, the first quartile corresponds to the most abstract words, while the last corresponds to the most concrete words. The hardest pairs are those for which similarity is difficult to distinguish from relatedness. This is an interesting category since image-based word embeddings are perhaps less likely to confuse similarity with relatedness than distributional-based methods. 
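Returning briefly to §3.5, the inverse lookup of Eqs. (6)–(7) amounts to a softmax whose weight matrix is tied to the Picturebook matrix Ep, with a trainable per-word correction and bias; a minimal sketch (names are ours):

```python
import numpy as np

def inverse_picturebook_probs(h, E_p, E_corr, b):
    """Output distribution of Eq. (7): softmax over <h, e_i + e'_i> + b_i.

    h:      model/decoder state, shape (d,).
    E_p:    fixed Picturebook embedding matrix, shape (V, d).
    E_corr: trainable per-word correction vectors e', shape (V, d).
    b:      trainable per-word biases, shape (V,).
    """
    logits = (E_p + E_corr) @ h + b   # one score per vocabulary word
    logits -= logits.max()            # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()
```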
For Glove, scores 926 language deep network Melbourne association sun life not interdisciplinary deepest internet Austin inclusion prominence praising Nosign languages deep-sea cyberspace Raleigh committees Sol rejoicing prohibited literacy manta networks Cincinnati social Soleil freedom Forbidden sociology depths blueprints Yokohama groupe Sole glorifying no multilingual Jarvis connectivity Cleveland members Venere worshipping no-fly inclusion cyber interconnections Tampa participation Marte healed forbid communications AI blueprint Pittsburgh personnel eclipses praise 10 linguistics hackers AI Boston involvement Venus healing prohibiting values restarting interconnected Rochester staffing eclipse trust forbidden user-generated diver tech Frankfurt meetings fireballs happiness Stop Table 2: Nearest neighbours of words. Results are retrieved over the 100K most frequent words. Model all adjs nouns verbs conc-q1 conc-q2 conc-q3 conc-q4 hard Glove 40.8 62.2 42.8 19.6 43.3 41.6 42.3 40.2 27.2 Picturebook 37.3 11.7 48.2 17.3 14.4 27.5 46.2 60.7 28.8 Glove + Picturebook 45.5 46.2 52.1 22.8 36.7 41.7 50.4 57.3 32.5 Picturebook (Visual) 31.3 11.1 38.8 20.4 13.9 26.1 38.7 47.7 23.9 Picturebook (Semantic) 37.3 11.7 48.2 17.3 14.4 27.5 46.2 60.7 28.8 Picturebook (1) 24.5 2.6 33.5 12.1 4.7 17.8 32.8 47.8 13.6 Picturebook (2) 28.4 6.5 38.9 9.0 5.0 21.3 34.3 55.1 15.7 Picturebook (3) 30.3 11.9 41.9 3.1 2.6 24.3 37.5 58.3 18.4 Picturebook (5) 34.4 6.8 44.5 18.0 9.0 27.9 42.8 58.3 25.9 Picturebook (10) 37.3 11.7 48.2 17.3 14.4 27.5 46.2 60.7 28.8 Table 3: SimLex-999 results (Spearman’s ⇢). Best results overall are bolded. Best results per section are underlined. Bracketed numbers signify the number of images used. Some rows are copied across sections for ease of reading. are computed via cosine similarity. For computing a score between 2 word pairs with Picturebook, we set s(w(1), w(2)) = −mini,j d(e(1) i , e(2) j ). 2 That is, the score is minus the smallest cosine distance between all pairs of images of the two words. Note that this reduces to negative cosine distance when using only 1 image per word. We also report results combining Glove and Picturebook by summing their two independent similarity scores. By default, we use 10 images for each embedding using the semantic convolutional network. Table 3 displays our results, from which several observations can be made. First, we observe that combining Glove and Picturebook leads to improved similarity across most categories. For adjectives and the most abstract category, Glove performs significantly better, while for the most concrete category Picturebook is significantly better. This result confirms that Glove and Picturebook capture very different properties of words. Next we observe that the performance of Picturebook gets progressively better across each concreteness quartile rating, with a 20 point improvement over Glove for the most concrete category. 2We found scoring all pairs of images to outperform scoring only the corresponding equally ranked image. For the hardest subset of words, Picturebook performs slightly better than Glove while Glove performs better across all pairs. We also compare to a convolutional network trained with visual similarity. We observe a performance difference between our visual and semantic embeddings: on all categories except verbs, the semantic embeddings outperform visual ones, even on the most concrete categories. This indicates the importance of the type of similarity used for training the model. 
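A minimal sketch of this all-pairs scoring, with each word's Picturebook embedding reshaped back into its k per-image vectors (names are ours):

```python
import numpy as np

def picturebook_pair_score(e1, e2, k=10):
    """Word-pair score s(w1, w2) = -min_{i,j} d(e_i^(1), e_j^(2)).

    e1, e2: Picturebook embeddings of shape (64*k,), i.e. k concatenated image
    vectors.  d is cosine distance, so with a single image per word this reduces
    to negative cosine distance, as noted in the text.
    """
    A = e1.reshape(k, -1)
    B = e2.reshape(k, -1)
    A = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-12)
    B = B / (np.linalg.norm(B, axis=1, keepdims=True) + 1e-12)
    cos_dist = 1.0 - A @ B.T      # (k, k) cosine distances over all image pairs
    return -cos_dist.min()
```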
Finally we note that adding more images nearly consistently improves similarity scores across categories. Kiela et al. (2016) showed that after 10-20 images, performance tends to saturate. All subsequent experiments use 10 images with semantic Picturebook. 4.3 Sentential Inference and Relatedness We next consider experiments on 3 pairwise prediction datasets: SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2017) and SICK (Marelli et al., 2014). The first two are natural language inference tasks and the third is a sentence semantic relatedness task. We explore the use of two types of sentential encoders: Bag-of-Words (BoW) and BiLSTM-Max (Conneau et al., 2017a). 927 Model SNLI MultiNLI SICK Relatedness dev test dev-mat dev-mis test-p test-s test-mse Glove (bow) 85.2 84.2 70.5 69.9 86.8 79.8 25.2 Picturebook (bow) 84.0 83.8 67.9 67.1 85.8 79.3 27.0 Glove + Picturebook (bow) 86.2 85.2 71.3 70.9 87.2 80.9 24.4 BiLSTM-Max (Conneau et al., 2017a) 85.0 84.5 Glove 86.8 86.3 74.1 74.5 Picturebook 85.2 85.1 70.7 70.3 Glove + Picturebook 86.7 86.1 73.7 73.7 Glove + Picturebook + Contextual Gating 86.9 86.5 74.2 74.4 Table 4: Classification accuracies are reported for SNLI and MulitNLI. For SICK we report Pearson, Spearman and MSE. Higher is better for all metrics except MSE. Best results overall per column are bolded. Best results per section are underlined. Three sets of features are used: Glove only, Picturebook only and Glove + Picturebook. For the latter, we use multimodal gating for all encoders and contextual gating in the BiLSTM-Max model. For SICK, we follow previous work and report average results across 5 runs (Tai et al., 2015). Due to the small size of the dataset, we only experiment with BoW on SICK. The full details of hyperparameters are discussed in Appendix B. Table 4 displays our results. For BoW models, adding Picturebook embeddings to Glove results in significant gains across all three tasks. For BiLSTM-Max, our contextual gating sets a new state-of-the-art on SNLI sentence encoding methods (methods without interaction layers), outperforming the recently proposed methods of Im and Cho (2017); Shen et al. (2018). It is worth noting the effect that different encoders have when using our embeddings. While non-contextual gating is sufficient to improve bag-of-words methods, with BiLSTM-Max it slightly hurts performance over the Glove baseline. Adding contextual gating was necessary to improve over the Glove baseline on SNLI. Finally we note the strength of our own Glove baseline over the reported results of Conneau et al. (2017a), from which we improve on their accuracy from 85.0 to 86.8 on the development set. 3 4.4 Sentiment and Topic Classification Our next set of experiments aims to determine how well Picturebook embeddings do on tasks that are primarily non-visual, such as topic and sentiment classification. We experiment with 7 datasets provided by Zhang et al. (2015) and compare bag-ofwords models against n-gram baselines provided 3All reported results on SNLI are available at https: //nlp.stanford.edu/projects/snli/ by the authors as well as fastText (Joulin et al., 2017). Hyperparameter details are reported in Appendix B. Our experimental results are provided in Table 5. Perhaps unsurprisingly, adding Picturebook to Glove matches or only slightly improves on 5 out of 7 tasks and obtains a lower result on AG News and Yahoo. 
Our results show that Picturebook embeddings, while minimally aiding in performance, can perform reasonably well on their own - outperforming the n-gram baselines of (Zhang et al., 2015) on 5 out of 7 tasks and the unigram fastText baseline on all 7 tasks. This result shows that our embeddings are able to work as a general text embedding, though they typically lag behind Glove. We note that the best performing methods on these tasks are based on convolutional neural networks (Conneau et al., 2017b). 4.5 Image-Sentence Ranking We next consider experiments that map images and sentences into a common vector space for retrieval. Here, we utilize VSE++ (Faghri et al., 2017) as our base model and evaluate on the COCO dataset (Lin et al., 2014). VSE++ improves over the original CNN-LSTM embedding method of Kiros et al. (2015a) by using hard negatives instead of summing over contrastive examples. We re-implement their model with 2 modifications: 1) we replace the unidirectional LSTM encoder with a BiLSTM-Max sentence encoder and 2) we use Inception-V3 (Szegedy et al., 2016) as our CNN instead of ResNet 152 (He et al., 2016). As in previous work, we report the mean Recall@K (R@K) and the median rank over 1000 images and 5000 sentences. Full details of the hyperparameters are in Appendix B. Table 6 displays our results on this task. 928 Model AG DBP Yelp P. Yelp F. Yah. A. Amz. F. Amz. P. BoW (Zhang et al., 2015) 88.8 96.6 92.2 58.0 68.9 54.6 90.4 ngrams (Zhang et al., 2015) 92.0 98.6 95.6 56.3 68.5 54.3 92.0 ngrams TFIDF (Zhang et al., 2015) 92.4 98.7 95.4 54.8 68.5 52.4 91.5 fastText (Joulin et al., 2017) 91.5 98.1 93.8 60.4 72.0 55.8 91.2 fastText-bigram (Joulin et al., 2017) 92.5 98.6 95.7 63.9 72.3 60.2 94.6 Glove (bow) 94.0 98.6 94.4 61.7 74.1 58.5 93.2 Picturebook (bow) 92.8 98.5 94.4 61.6 73.3 57.8 92.9 Glove + Picturebook (bow) 93.9 98.6 94.5 61.9 73.8 58.7 93.2 Table 5: Test accuracy [%] on topic and sentiment classification datasets. Best results per dataset are bolded, best results per section are underlined. We compare directly against other bag of ngram baselines. Image Annotation Image Search Model R@1 R@5 R@10 Med r R@1 R@5 R@10 Med r VSE++ (Faghri et al., 2017) 64.6 95.7 1 52.0 92.0 1 Glove 64.6 88.9 95.5 1 53.7 86.5 94.4 1 Picturebook 62.4 90.2 95.3 1 54.2 86.4 94.3 1 Glove + Picturebook 61.8 89.2 95.0 1 54.1 86.7 94.7 1 Glove + Picturebook + Contextual Gating 63.4 90.3 96.5 1 55.2 87.2 94.4 1 Table 6: COCO test-set results for image-sentence retrieval experiments. Our models use VSE++. R@K is Recall@K (high is good). Med r is the median rank (low is good). Our Glove baseline was able to match or outperform the reported results in Faghri et al. (2017) with the exception of Recall@10 for image annotation, where it performs slightly worse. Glove+Picturebook improves over the Glove baseline for image search but falls short on image annotation. However, using contextual gating results in improvements over the baseline on all metrics except R@1 for image annotation. Our reported results have been recently outperformed by Gu et al. (2018); Huang et al. (2018b); Lee et al. (2018), which are more sophisticated methods that incorporate generative modelling, reinforcement learning and attention. 4.6 Machine Translation We experiment with the Multi30k (Elliott et al., 2016, 2017) dataset for MT. We compare our Picturebook models with other text-only nonensembled models on the Flickr Test2016, Flickr Test2017 and MSCOCO test sets from Caglayan et al. 
(2017), the winner of the WMT 17 Multimodal Machine Translation competition (Elliott et al., 2017). We use the standard seq2seq (Sutskever et al., 2015) with content-based attention (Bahdanau et al., 2015) model and we describe our hyperparmeters in Appendix B. Table 7 summarizes our English ! German results and Table 8 summarizes our English ! French results. We find our models to perform better in BLEU than METEOR relatively compared to (Caglayan et al., 2017). We believe this is due to the fact we did not use Byte Pair Encoding (BPE) (Sennrich et al., 2016), and METEOR captures word stemming (Denkowski and Lavie, 2014). This is also highlighted where our French models perform better than our German models relatively, due to the compounding nature of German words. Since seq2seq MT models are typically trained without Glove embeddings, we also did not use Glove embeddings for this task, but rather we combine randomly initialized learnable embeddings with the fixed Picturebook embeddings. We find the gating mechanism not to help much with the MT task since the trainable embeddings are free to change their norm magnitudes. We did not experiment with regularizing the norm of the embeddings. On the English ! German tasks, we find our Picturebook model to perform on average 0.8 BLEU or 0.7 METEOR over our baseline. On the German task, compared to the previously best published results (Caglayan et al., 2017) we do better in BLEU but slightly worse in METEOR. We suspect this is due to the fact that we did not use BPE. On the English ! French task, the Picturebook models do on average 1.2 BLEU better or 1.0 METEOR over our baseline. We also report results for the IWSLT 2014 German-English task (Cettolo et al., 2014) in Table 9. Compared to our baseline, we report a gain of 0.3 and 1.1 BLEU for German ! English and English ! German respectively. We 929 Model Test2016 Test2017 MSCOCO BLEU METEOR BLEU METEOR BLEU METEOR BPE (Caglayan et al., 2017) 38.1 57.3 30.8 51.6 26.4 46.8 Baseline 38.9 56.5 32.6 50.7 26.8 45.4 Picturebook 39.6 56.9 31.8 50.1 27.7 45.8 Picturebook + Inverse Picturebook 40.2 57.2 32.3 50.7 27.8 46.3 Picturebook + Inverse Picturebook + Gating 40.0 57.3 33.0 51.1 27.9 46.5 Table 7: Machine Translation results on the Multi30k English ! German task. We note that our models do not use BPE, and we perform better in BLEU relative to METEOR. Model Test2016 Test2017 MSCOCO BLEU METEOR BLEU METEOR BLEU METEOR BPE (Caglayan et al., 2017) 52.5 69.6 50.4 67.5 41.2 61.3 Baseline 60.7 74.1 52.3 67.4 42.8 60.6 Picturebook 61.0 74.2 52.4 67.5 43.1 61.0 Picturebook + Inverse Picturebook 61.8 75.0 52.6 67.7 42.8 61.2 Picturebook + Inverse Picturebook + Gating 62.1 75.2 53.6 68.4 43.8 61.6 Table 8: Machine Translation results on the Multi30k English ! French task. report new state-of-the-art results for the English ! German task at 25.4 BLEU, while our German ! English model achieves 29.6 BLEU which is slightly behind the recently proposed Neural Phrase-based Machine Translation (NPMT) model at 29.9 (Huang et al., 2018a). We note that the NPMT is not a seq2seq model and can be augmented with our Picturebook embeddings. We also note that our models may not be directly comparable to previously published seq2seq models from (Wiseman and Rush, 2016; Bahdanau et al., 2017) since we used a deeper encoder and decoder. 4.7 Limitations We explored the use of Picturebook for larger machine translation tasks, including the popular WMT14 benchmarks. 
For these tasks, we found that models that incorporate Picturebook led to faster convergence. However, we were not able to improve upon BLEU scores from equivalent models that do not use Picturebook. This indicates that while our embeddings are useful for smaller MT experiments, further research is needed on how to best incorporate grounded representations in larger translation tasks. 4.8 Gate Analysis In this section we perform an extensive analysis of the gating mechanism for models trained across datasets used in our experiments. In our first experiment, we aim to determine how well gate activations correlate to a) human judgments of concreteness and b) image dispersion (Kiela et al., 2014). For concreteness ratings, we use the dataset of Brysbaert et al. (2013) which provides ratings for 40,000 English lemmas. Image dispersion is the average distance between all pairs of images returned from a search query. It was shown in Kiela et al. (2014) that abstract words tend to have higher dispersion ratings, due to having much higher variety in the types of images returned from a query. On the other hand, low dispersion ratings were more associated with concrete words. For each word, we compute the mean gate activation value for Picturebook embeddings. 4 For concreteness ratings, we take the intersection of words that have ratings with the dataset vocabulary. We then compute the Spearman correlation of mean gate activations with a) concreteness ratings and b) image dispersion scores. Table 10 illustrates the result of this analysis. We observe that gates have high correlations with concreteness ratings and strong negative correlations with image dispersion scores. Moreover, this result holds true across all datasets, even those that are not inherently visual. These results provide evidence that our gating mechanism actively prefers Glove embeddings for abstract words and Picturebook embeddings for concrete words. Appendix A contains examples of words that most strongly activate Glove and Picturebook gates. 4We only consider non-contextualized gates. 930 Model DE ! EN BLEU EN ! DE BLEU MIXER (Ranzato et al., 2016) 21.8 Beam Search Optimization (Wiseman and Rush, 2016) 25.5 Actor-Critic + Log Likelihood (Bahdanau et al., 2017) 28.5 Neural Phrase-based Machine Translation (Huang et al., 2018a) 29.9 25.1 Baseline 29.3 24.3 Picturebook 29.6 25.4 Table 9: Machine Translation results on the IWSLT 2014 German-English task. Rank SNLI MultiNLI COCO AG-News DBpedia Yelp Amazon ccorr disp ccorr disp ccorr disp ccorr disp ccorr disp ccorr disp ccorr disp top-1% 73 -41 39 -27 53 -22 60 -16 56 -30 47 -28 32 -17 top-10% 54 -39 48 -34 34 -23 52 -24 54 -32 49 -26 50 -30 all 35 -30 30 -27 21 -16 36 -17 39 -30 24 -20 33 -31 Table 10: Correlations (rounded, x100) of mean Picturebook gate activations to human judgements of concreteness ratings (ccorr) and image dispersion (disp) within the specified most frequent words. (a) SNLI (b) MultiNLI (c) AG-News Figure 1: POS analysis. Top bar for each tag is Glove, bottom is Picturebook. Tags are sorted by Glove frequencies. Results taken over the top 100 mean activation values within the 10K most frequent words. Finally we analyze the parts-of-speech (POS) of the highest activated words. These results are shown in Figure 1. The highest scoring Picturebook words are almost all singular and plural nouns (NN / NNS). We also observe tags which are exclusively Glove oriented, namely adverbs (RB), prepositions (IN) and determiners (DT). 
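A minimal sketch of the two quantities correlated in Table 10: image dispersion per word, and the Spearman correlation of mean gate activations with a per-word score such as a concreteness rating or dispersion (names are ours).

```python
import numpy as np
from scipy.stats import spearmanr

def image_dispersion(image_feats):
    """Average pairwise cosine distance between a word's retrieved images
    (Kiela et al., 2014); higher values tend to indicate more abstract words."""
    X = image_feats / (np.linalg.norm(image_feats, axis=1, keepdims=True) + 1e-12)
    sims = X @ X.T
    iu = np.triu_indices(len(X), 1)          # all unordered image pairs
    return float((1.0 - sims[iu]).mean())

def gate_correlation(mean_gate, scores):
    """Spearman correlation between mean Picturebook gate activations and a
    per-word score (concreteness rating or image dispersion); dicts keyed by word."""
    words = sorted(set(mean_gate) & set(scores))
    rho, _ = spearmanr([mean_gate[w] for w in words], [scores[w] for w in words])
    return rho
```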
5 Conclusion Traditionally, word representations have been built on co-occurrences of neighbouring words; and such representations only make use of the statistics of the text distribution. Picturebook embeddings offer an alternative approach to constructing word representations grounded in image search engines. In this work we demonstrated that Picturebook complements traditional embeddings on a wide variety of tasks. Through the use of multimodal gating, our models lead to interpretable weightings of abstract vs concrete words. In future work, we would like to explore other aspects of search engines for language grounding as well as the effect these embeddings may have on learning generic sentence representations (Kiros et al., 2015b; Hill et al., 2016; Conneau et al., 2017a; Logeswaran and Lee, 2018). Recently, contextualized word representations have shown promising improvements when combined with existing embeddings (Melamud et al., 2016; Peters et al., 2017; McCann et al., 2017; Peters et al., 2018). We expect that integrating Picturebook with these embeddings to lead to further performance improvements as well. Acknowledgments The authors would like to thank Chuck Rosenberg, Tom Duerig, Neil Alldrin, Zhen Li, Filipe Gonc¸alves, Mia Chen, Zhifeng Chen, Samy Bengio, Yu Zhang, Kevin Swersky, Felix Hill and the ACL anonymous reviewers for their valuable advice and feedback. 931 References Andrew Anderson, Douwe Kiela, Stephen Clark, and Massimo Poesio. 2017. Visually Grounded and Textual Semantic Models Differentially Decode Brain Activity Associated with Concrete and Abstract Nouns. In ACL. John Arevalo, Thamar Solorio, Manuel Montes y Gomez, and Fabio A. Gonzalez. 2017. Gated Multimodal Units for Information Fusion. In arXiv:1702.01992. Jimmy Ba, Jamie Kiros, and Geoffrey Hinton. 2016. Layer Normalization. In arXiv:1607.06450. Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. An ActorCritic Algorithm for Sequence Prediction. In ICLR. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In ICLR. Shane Bergsma and Benjamin Van Durme. 2011. Learning Bilingual Lexicons using the Visual Similarity of Labeled Web Images. In IJCAI. Shane Bergsma and Randy Goebel. 2011. Using Visual Information to Predict Lexical Preference. In RANLP. Sai Abishek Bhaskar, Maximilian Koper, Sabine Schulte Im Walde, and Diego Frassinelli. 2017. Exploring Multi-Modal Text+Image Models to Distinguish between Abstract and Concrete Nouns. In IWCS Workshop on Foundations of Situated and Multimodal Communication. Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP. Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal Distributional Semantics. Journal of Artificial Intelligence Research 49(1). Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2013. Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods 46(3). Luana Bulat, Stephen Clark, and Ekaterina Shutova. 2017. Speaking, Seeing, Understanding: Correlating semantic models with conceptual representation in the brain. In EMNLP. Luana Bulat, Douwe Kiela, and Stephen Clark. 2015. Vision and Feature Norms: Improving automatic feature norm learning through cross-modal maps. In NAACL. 
Ozan Caglayan, Walid Aransa, Adrien Bardet, Mercedes Garcia-Martinez, Marc Masana, Luis Herranz, and Joost van de Weijer. 2017. LIUM-CVC Submissions for WMT17 Multimodal Translation Task. In Conference on Machine Translation. Mauro Cettolo, Jan Niehues, Sebastian Stuker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT Evaluation Campaign, IWSLT 2014. In IWSLT. Grzegorz Chrupala, Akos Kadar, and Afra Alishah. 2015. Learning language through pictures. In EMNLP. Guillem Collell and Marie-Francine Moens. 2016. Is an Image Worth More than a Thousand Words? On the Fine-Grain Semantic Differences between Visual and Linguistic Representations. In COLING. Guillem Collell, Ted Zhang, and Marie-Francine Moens. 2017. Imagined Visual Representations as Multimodal Embeddings. In AAAI. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017a. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. In EMNLP. Alexis Conneau, Holger Schwenk, Lo¨ıc Barrault, and Yann Lecun. 2017b. Very deep convolutional networks for text classification. In EACL. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In EACL: Workshop on Statistical Machine Translation. Desmond Elliott, Stella Frank, Loic Barrault, Fethi Bougares, and Lucia Specia. 2017. Findings of the Second Shared Task on Multimodal Machine Translation and Multilingual Image Description. In Conference on Machine Translation. Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. 2016. Multi30K: Multilingual EnglishGerman Image Description. In ACL: Workshop on Vision and Language. Fartash Faghri, David Fleet, Jamie Kiros, and Sanja Fidler. 2017. VSE++: Improving VisualSemantic Embeddings with Hard Negatives. In arXiv:1707.05612. Goran Glavas, Ivan Vulic, and Simone Paolo Ponzetto. 2017. If Sentences Could See: Investigating Visual Information for Semantic Textual Similarity. In IWCS. Jiuxiang Gu, Jianfei Cai, Shafiq Joty, Li Niu, and Gang Wang. 2018. Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval with Generative Models. In CVPR. Mareike Hartmann and Anders Sogaard. 2017. Limitations of Cross-Lingual Learning from Image Search. In arXiv:1709.05914. 932 Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In CVPR. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In NAACL. Felix Hill and Anna Korhonen. 2014. Learning Abstract Concept Embeddings from Multi-Modal Data: Since You Probably Can’t See What I Mean. In EMNLP. Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating Semantic Models with (Genuine) Similarity Estimation. Computational Linguistics 41(4). Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9(8). Po-Sen Huang, Chong Wang, Sitao Huang, Dengyong Zhou, and Li Deng. 2018a. Towards Neural Phrasebased Machine Translation. In ICLR. Yan Huang, Qi Wu, and Liang Wang. 2018b. Learning Semantic Concepts and Order for Image and Sentence Matching. In CVPR. Jinbae Im and Sungzoon Cho. 2017. Distance-based self-attention network for natural language inference. In arXiv:1712.02047. Hakan Inan, Khashayar Khosravi, and Richard Socher. 2017. Tying Word Vectors and Word Classifiers: A Loss Framework for Languag Modeling. In ICLR. 
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of Tricks for Efficient Text Classification. In EACL. Armand Joulin, Laurens van der Maaten, Allan Jabri, and Nicolas Vasilache. 2016. Learning Visual Features from Large Weakly Supervised Data. In ECCV. Douwe Kiela. 2016. MMFeat: A Toolkit for Extracting Multi-Modal Features. In ACL: System Demonstrations. Douwe Kiela and Leon Bottou. 2014. Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics. In EMNLP. Douwe Kiela, Alexis Conneau, Allan Jabri, and Maximilian Nickel. 2017. Learning Visually Grounded Sentence Representations. In arXiv:1707.06320. Douwe Kiela, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2018. Efficient Large-Scale MultiModal Classification. In AAAI. Douwe Kiela, Felix Hill, Anna Korhonen, and Stephen Clark. 2014. Improving Multi-Modal Representations Using Image Dispersion: Why Less is Sometimes More. In ACL. Douwe Kiela, Laura Rimell, Ivan Vulic, and Stephen Clark. 2015a. Exploiting Image Generality for Lexical Entailment Detection. In ACL. Douwe Kiela, Anita Vero, and Stephen Clark. 2016. Comparing data sources and architectures for deep visual representation learning in semantics. In EMNLP. Douwe Kiela, Ivan Vulic, and Stephen Clark. 2015b. Visual Bilingual Lexicon Induction with Transferred ConvNet Features. In EMNLP. Diederik Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In ICLR. Ryan Kiros, Ruslan Salakhutdinov, and Richard Zemel. 2015a. Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models. In arXiv:1411.2539. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015b. Skip-thought vectors. In NIPS. Angeliki Lazaridou, Nghia The Pham, and Marco Baroni. 2015. Combining Language and Vision with a Multimodal Skip-gram Model. In ACL. Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. 2018. Stacked Cross Attention for Image-Text Matching. In arXiv:1803.08024. Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, Lawrence Zitnick, and Piotr Doll´ar. 2014. Microsoft COCO: Common Objects in Context. In ECCV. Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. In ICLR. Junhua Mao, Jiajing Xu, Yushi Jing, and Alan Yuille. 2016. Training and Evaluating Multimodal Word Embeddings with Large-scale Web Annotated Images. In NIPS. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In LREC. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In NIPS. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning Generic Context Embedding with Bidirectional LSTM. In CoNLL. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In EMNLP. 933 Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. 2017. Regularizing Neural Networks by Penalizing Confident Output Distributions. In ICLR Workshop. Matthew Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language models. In ACL. 
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In arXiv:1802.05365. Ofir Press and Lior Wolf. 2017. Using the Output Embedding to Improve Language Models. In EACL. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence Level Training with Recurrent Neural Networks. In ICLR. Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2016. Recurrent Dropout without Memory Loss. In COLING. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In ACL. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, and Chengqi Zhang. 2018. Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling. In arXiv:1801.10296. Ekaterina Shutova, Douwe Kiela, and Jean Maillard. 2016. Black Holes and White Rabbits: Metaphor Identification with Visual Features. In NAACL. Carina Silberer, Vittorio Ferrari, and Mirella Lapata. 2017. Visually Grounded Meaning Representations. PAMI . Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2015. Sequence to Sequence Learning with Neural Networks. In NIPS. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the Inception Architecture for Computer Vision. In CVPR. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. In ACL. Steffen Thoma, Achim Rettinger, and Fabian Both. 2017. Towards Holistic Concept Representations: Embedding Relational Knowledge, Visual Attributes, and Distributional Word Semantics. In ISWC. Ivan Vulic, Douwe Kiela, Stephen Clark, and MarieFrancine Moens. 2016. Multi-Modal Representations for Improved Bilingual Lexicon Learning. In ACL. Jiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jingbin Wang, James Philbin, Bo Chen, and Ying Wu. 2014. Learning Fine-grained Image Similarity with Deep Ranking. In CVPR. Shaonan Wang, Jiajun Zhang, and Chengqing Zong. 2018. Learning Multimodal Word Representation via Dynamic Fusion Methods. In AAAI. Adina Williams, Nikita Nangia, and Samuel Bowman. 2017. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. In arXiv:1704.05426. Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-Sequence Learning as Beam-Search Optimization. In EMNLP. ´Eloi Zablocki, Benjamin Piwowarski, Laure Soulier, and Patrick Gallinari. 2018. Learning Multi-Modal Word Representation Grounded in Visual Context. In AAAI. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level Convolutional Networks for Text Classification. In NIPS.
2018
85
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 934–945 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 934 What Action Causes This? Towards Naive Physical Action-Effect Prediction Qiaozi Gao† Shaohua Yang† Joyce Y. Chai† Lucy Vanderwende‡ †Department of Computer Science and Engineering, Michigan State University ‡Microsoft Research, Redmond, Washington {gaoqiaoz, yangshao, jchai}@msu.edu lucy [email protected] Abstract Despite recent advances in knowledge representation, automated reasoning, and machine learning, artificial agents still lack the ability to understand basic actioneffect relations regarding the physical world, for example, the action of cutting a cucumber most likely leads to the state where the cucumber is broken apart into smaller pieces. If artificial agents (e.g., robots) ever become our partners in joint tasks, it is critical to empower them with such action-effect understanding so that they can reason about the state of the world and plan for actions. Towards this goal, this paper introduces a new task on naive physical action-effect prediction, which addresses the relations between concrete actions (expressed in the form of verbnoun pairs) and their effects on the state of the physical world as depicted by images. We collected a dataset for this task and developed an approach that harnesses web image data through distant supervision to facilitate learning for action-effect prediction. Our empirical results have shown that web data can be used to complement a small number of seed examples (e.g., three examples for each action) for model learning. This opens up possibilities for agents to learn physical action-effect relations for tasks at hand through communication with humans with a few examples. 1 Introduction Causation in the physical world has long been a central discussion to philosophers who study casual reasoning and explanation (Ducasse, 1926; Gopnik et al., 2007), to mathematicians or computer scientists who apply computational approaches to model cause-effect prediction (Pearl et al., 2009), and to domain experts (e.g., medical doctors) who attempt to understand the underlying cause-effect relations (e.g., disease and symptoms) for their particular inquires. Apart from this wide range of topics, this paper investigates a specific kind of causation, the very basic causal relations between a concrete action (expressed in the form of a verb-noun pair such as “cut-cucumber”) and the change of the physical state caused by this action. We call such relations naive physical action-effect relations. For example, given an image as shown in Figure 1, we would have no problem predicting what actions can cause the state of the world depicted in the image, e.g., slicing an apple will likely lead to the state. On the other hand, given a statement “slice an apple”, it would not be hard for us to imagine what state change may happen to the apple. We can make such action-effect prediction because we have developed an understanding of this kind of basic action-effect relations at a very young age (Baillargeon, 2004). What happens to machines? Will artificial agents be able to make the same kind of predictions? The answer is not yet. Despite tremendous progress in knowledge representation, automated reasoning, and machine learning, artificial agents still lack the understanding of naive causal relations regarding the physical world. 
This is one of the bottlenecks in machine intelligence. If artificial agents ever become capable of working with humans as partners, they will need to have this kind of physical action-effect understanding to help them reason, learn, and perform actions. To address this problem, this paper introduces a new task on naive physical action-effect prediction. This task supports both cause predic935 Figure 1: Images showing the effects of “slice an apple”. tion: given an image which describes a state of the world, identify the most likely action (in the form of a verb-noun pair, from a set of candidates) that can result in that state; and effect prediction: given an action in the form of a verb-noun pair, identify images (from a set of candidates) that depicts the most likely effects on the state of the world caused by that action. Note that there could be different ways to formulate this problem, for example, both causes and effects are in the form of language or in the form of images/videos. Here we intentionally frame the action as a language expression (i.e., a verb-noun pair) and the effect as depicted in an image in order to make a connection between language and perception. This connection is important for physical agents that not only can perceive and act, but also can communicate with humans in language. As a first step, we collected a dataset of 140 verb-noun pairs. Each verb-noun pair is annotated with possible effects described in language and depicted in images (where language descriptions and image descriptions are collected separately). We have developed an approach that applies distant supervision to harness web data for bootstrapping action-effect prediction models. Our empirical results have shown that, using a simple bootstrapping strategy, our approach can combine the noisy web data with a small number of seed examples to improve action-effect prediction. In addition, for a new verb-noun pair, our approach can infer its effect descriptions and predict action-effect relations only based on 3 image examples. The contributions of this paper are three folds. First, it introduces a new task on physical actioneffect prediction, a first step towards an understanding of causal relations between physical actions and the state of the physical world. Such ability is central to robots which not only perceive from the environment, but also act to the environment through planning. To our knowledge, there is no prior work that attempts to connect actions (in language) and effects (in images) in this nature. Second, our approach harnesses the large amount of image data available on the web with minimum supervision. It has shown that physical action-effect models can be learned through a combination of a few annotated examples and a large amount of un-annotated web data. This opens up the possibility for humans to teach robots new tasks through language communication with a small number of examples. Third, we have created a dataset for this task, which is available to the community 1. Our bootstrapping approach can serve as a baseline for future work on this topic. In the following sections, we first describe our data collection effort, then introduce the bootstrapping approach for action-effect prediction, and finally present results from our experiments. 2 Related Work In the NLP community, there has been extensive work that models cause-effect relations from text (Cole et al., 2005; Do et al., 2011; Yang and Mao, 2014). 
Most of these previous studies address high-level causal relations between events, for example, “the collapse of the housing bubble” causes the effect of “stock prices to fall” (Sharp et al., 2016). They do not concern the kind of naive physical action-effect relations in this paper. There is also an increasing amount of effort on capturing commonsense knowledge, for example, through knowledge base population. Except for few (Yatskar et al., 2016) that acquires knowledge from images, most of the previous effort apply information extraction techniques to extract facts from a large amount of web data (Dredze et al., 2010; Rajani and Mooney, 2016). DBPedia (Lehmann et al., 2015), Freebase (Bollacker et al., 2008), and YAGO (Suchanek et al., 2007) knowledge bases contain millions of facts about the world such as people and places. However, they do not contain basic cause-effect knowledge related to concrete actions and their effects to the world. Recent work started looking into phys1This dataset is available at http://lair.cse.msu. edu/lair/projects/actioneffect.html 936 ical causality of action verbs (Gao et al., 2016) and other physical properties of verbs (Forbes and Choi, 2017; Zellers and Choi, 2017; Chao et al., 2015). But they do not address action-effect prediction. The idea of modeling object physical state change has also been studied in the computer vision community (Fire and Zhu, 2016). Computational models have been developed to infer object states from observations and to further predict future state changes (Zhou and Berg, 2016; Wu et al., 2016, 2017). The action recognition task can be treated as detecting the transformation on object states (Fathi and Rehg, 2013; Yang et al., 2013; Wang et al., 2016). However these previous works only focus on the visual presentation of motion effects. Recent years have seen an increasing amount of work integrating language and vision, for example, visual question answering (Antol et al., 2015; Fukui et al., 2016; Lu et al., 2016), image description generation (Xu et al., 2015; Vinyals et al., 2015), and grounding language to perception (Yang et al., 2016; Roy, 2005; Tellex et al., 2011; Misra et al., 2017). While many approaches require a large amount of training data, recent works have developed zero/few shot learning for language and vision (Mukherjee and Hospedales, 2016; Xu et al., 2016, 2017a,b; Tsai and Salakhutdinov, 2017). Different from these previous works, this paper introduces a new task that connects language with vision for physical action-effect prediction. In the robotics community, an important task is to enable robots to follow human natural language instructions. Previous works (She et al., 2014; Misra et al., 2015; She and Chai, 2016, 2017) explicitly model verb semantics as desired goal states and thus linking natural language commands with underlying planning systems for action planning and execution. However, these studies were carried out either in a simulated world or in a carefully curated simple environment within the limitation of the robot’s manipulation system. And they only focus on a very limited set of domain specific actions which often only involve the change of locations. In this work, we study a set of open-domain physical actions and a variety of effects perceived from the environment (i.e., from images). 3 Action-Effect Data Collection We collected a dataset to support the investigation on physical action-effect prediction. 
This dataset consists of actions expressed in the form of verbnoun pairs, effects of actions described in language, and effects of actions depicted in images. Note that, as we would like to have a wide range of possible effects, language data and image data are collected separately. Actions (verb-noun pairs). We selected 40 nouns that represent everyday life objects, most of them are from the COCO dataset (Lin et al., 2014), with a combination of food, kitchen ware, furniture, indoor objects, and outdoor objects. We also identified top 3000 most frequently used verbs from Google Syntactic N-gram dataset (Goldberg and Orwant, 2013) (Verbargs set). And we extracted top frequent verb-noun pairs containing a verb from the top 3000 verbs and a noun in the 40 nouns which hold a dobj (i.e., direct object) dependency relation. This resulted in 6573 candidate verbnoun pairs. As changes to an object can occur at various dimensions (e.g., size, color, location, attachment, etc.), we manually selected a subset of verb-noun pairs based on the following criteria: (1) changes to the objects are visible (as opposed to other types such as temperature change, etc.); and (2) changes reflect one particular dimension as opposed to multiple dimensions (as entailed by high-level actions such as “cook a meal”, which correspond to multiple dimensions of change and can be further decomposed into basic actions). As a result, we created a subset of 140 verb-noun pairs (containing 62 unique verbs and 39 unique nouns) for our investigation. Effects Described in Language. The basic knowledge about physical action-effect is so fundamental and shared among humans. It is often presupposed in our communication and not explicitly stated. Thus, it is difficult to extract naive action-effect relations from the existing textual data (e.g., web). This kind of knowledge is also not readily available in commonsense knowledge bases such as ConceptNet (Speer and Havasi, 2012). To overcome this problem, we applied crowd-sourcing (Amazon Mechanical Turk) and collected a dataset of language descriptions describing effects for each of the 140 verb-noun pairs. The workers were shown a verb-noun pair, and were asked to use their own words and imag937 Action Effect Text ignite paper The paper is on fire. soak shirt The shirt is thoroughly wet. fry potato The potatoes become crisp and golden. stain shirt There is a visible mark on the shirt. Table 1: Example action and effect text from our collected data. inations to describe what changes might occur to the corresponding object as a result of the action. Each verb-noun pair was annotated by 10 different annotators, which has led to a total of 1400 effect descriptions. Table 1 shows some examples of collected effect descriptions. These effect language descriptions allow us to derive seed effect knowledge in a symbolic form. Effects Depicted in Images. For each action, three students searched the web and collected a set of images depicting potential effects. Specifically, given a verb-noun pair, each of the three students was asked to collect at least 5 positive images and 5 negative images. Positive images are those deemed to capture the resulting world state of the action. And negative images are those deemed to capture some state of the related object (i.e., the nouns in the verb-noun pairs), but are not the resulting state of the corresponding action. Then, each student was also asked to provide positive or negative labels for the images collected by the other two students. 
As a result each image has three positive/negative labels. We only keep the images whose labels are agreed by all three students. In total, the dataset contains 4163 images. On average, each action has 15 positive images, and 15 negative images. Figure 2 shows several examples of positive images and negative images of the action peel-orange. The positive images show an orange in a peeled state, while the negative images show oranges in different states (orange as a whole, orange slices, orange juice, etc.). 4 Action-Effect Prediction Action-effect prediction is to connect actions (as causes) to the effects of actions. Specifically, given an image which depicts a state of the world, our task is to predict what concrete actions could cause the state of the world. This task is different from traditional action recognition as the underlying actions (e.g., human body posture/movement) are not captured by the images. In this regard, it is also different from image description generation. Figure 2: Positive images (top row) and negative images (bottom row) of the action peel-orange. We frame the problem as a few-shot learning task, by only providing a few human-labelled images for each action at the training stage. Given the very limited training data, we attempt to make use of web-search images. Web search has been adopted by previous computer vision studies to acquire training data (Fergus et al., 2005; Kennedy et al., 2006; Berg et al., 2010; Otani et al., 2016). Compared with human annotations, web-search comes at a much lower cost, but with a trade-off of poor data quality. To address this issue, we apply a bootstrapping approach that aims to handle data with noisy labels. The first question is what search terms should be used for image search. There are two options. The first option is to directly use the action terms (i.e., verb-noun pairs) to search images and the downloaded web images are referred to as action web images. As desired images should be depicting effects of an action, terms describing effects become a natural choice. The second option is to use the key phrases extracted from language effect descriptions to search the web. The downloaded web images are referred to as effect web images. 4.1 Extracting Effect Phrases from Language Data We first apply chunking (shallow parsing) using the SENNA software (Collobert et al., 2011) to break an effect description into phrases such as noun phrases (NP), verb phrases (VP), prepositional phrases (PP), adjectives (ADJP), adverbs (ADVP), etc. After some examination, we found that most of the effect descriptions follow simple syntactic patterns. For a verb-noun pair, around 80% of its effect descriptions start with the same noun as the subject. In an effect description, the 938 Example patterns Example Effect Phrases (bold) extracted from effect descriptions VP with a verb ∈{be, become, turn, get} The ship is destroyed. VP + PRT The wall is knocked off. VP + ADVP The door swings forward. ADJP The window would begin to get clean. PP + NP The eggs are divided into whites and yolks. Table 2: Example patterns that are used to extract effect phrases (bold) from sample sentences. change of state associated with the noun is mainly captured by some key phrases. For example, an adjective phrase usually describes a physical state; verbs like be, become, turn, get often indicate a description of change of the state. Based on these observations, we defined a set of patterns to identify phrases that describe physical states of an object. 
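A simplified sketch of this pattern-based extraction is shown below. The assumed chunker output format, the verb list, and the two patterns are illustrative only; the paper uses SENNA chunks and the richer pattern set shown in Table 2.

```python
# Copular / change-of-state verbs that signal a state description (illustrative list).
STATE_VERBS = {"be", "is", "are", "was", "were", "become", "becomes",
               "turn", "turns", "turned", "get", "gets", "got"}

def extract_effect_phrases(chunks):
    """chunks: list of (tag, text) pairs from a shallow parser, e.g.
    [("NP", "The paper"), ("VP", "is"), ("PP", "on"), ("NP", "fire")].
    Returns candidate effect phrases according to two simple patterns."""
    phrases = []
    for i, (tag, text) in enumerate(chunks):
        words = text.split()
        first_word = words[0].lower() if words else ""
        if tag == "VP" and first_word in STATE_VERBS:
            # Keep what follows the state verb, inside and after the VP chunk.
            tail = " ".join(words[1:])
            following = " ".join(t for _, t in chunks[i + 1:])
            phrase = " ".join(p for p in (tail, following) if p)
            if phrase:
                phrases.append(phrase)
        elif tag == "ADJP":
            # Bare adjective phrases typically describe a physical state.
            phrases.append(text)
    return phrases

print(extract_effect_phrases([("NP", "The paper"), ("VP", "is"), ("PP", "on"), ("NP", "fire")]))
# ['on fire']
print(extract_effect_phrases([("NP", "The ship"), ("VP", "is destroyed")]))
# ['destroyed']
```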
In total, 1997 effect phrases were extracted from the language data. Table 2 shows some example patterns and example effect phrases that are extracted.

4.2 Downloading Web Images

The purpose of querying a search engine is to retrieve images of objects in certain effect states. To form image search keywords, the effect phrases are concatenated with the corresponding noun phrases, for example, "apple + into thin pieces". The image search results are downloaded and used as supplementary training data for the action-effect prediction models. However, web images can be noisy. First of all, not all of the automatically extracted effect phrases describe visible states of objects. Even if a phrase represents a visible object state, the retrieved results may not be relevant. Figure 3 shows some example image search results using queries describing the object name "book" and queries describing the object state, such as "book is on fire" and "book is set aflame". These state phrases were used by human annotators to describe the effect of the action "burn a book". We can see that the images returned from the query "book is set aflame" do not depict the physical effect state of "burn a book". Therefore, it is important to identify images with relevant effect states to train the model. To do that, we applied a bootstrapping method to handle the noisy web images, as described in Section 4.3. An action (i.e., a verb-noun pair) has multiple corresponding effect phrases, and all of their image search results are treated as training images for this action.

Figure 3: Examples of image search results (queries: "book", "book is on fire", "book is set aflame").

Since both the human annotated image data (Section 3) and the web-search image data were obtained from Internet search engines, they may have duplicates. As part of the annotated images are used as test data to evaluate the models, it is important to remove duplicates. We designed a simple method to remove any image from the web-search image set that has a duplicate in the human annotated set. We first embed all images into feature vectors using pre-trained CNNs. For each web-search image, we calculate its cosine similarity score with each of the annotated images, and we simply remove the web images that have a score larger than 0.95.

4.3 Models

We formulate the action-effect prediction task as a multi-class classification problem. Given an image, the model outputs a probability distribution q over the candidate actions (i.e., verb-noun pairs) that can potentially cause the effect depicted in the image. Specifically, for model training we are given a set of human annotated seeding image data {x, t} and a set of web-search image data {x′, t′}. Here x and x′ are the images (depicting effect states), and t and t′ are their classification targets (i.e., actions that cause the effects). Each target vector is the observed image label, with $t \in \{0, 1\}^C$ and $\sum_i t_i = 1$, where C is the number of classes (i.e., actions). The human annotated targets t can be trusted, but the targets of web-search images t′ are usually very noisy. Bootstrapping has been shown to be an effective method for handling noisily labelled data (Rosenberg et al., 2005; Whitney and Sarkar, 2012; Reed et al., 2014).
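Since the objectives presented next follow Reed et al. (2014), the bootstrapped cross-entropy can be sketched roughly as follows. This is a PyTorch sketch, not the authors' code: the shapes, the toy inputs, and the conventional negative sign (so that the quantity is minimized) are added here for illustration; β = 0.3 and the 141 classes match the settings reported below.

```python
import torch
import torch.nn.functional as F

def bootstrapped_cross_entropy(logits, noisy_targets, beta=0.3):
    """Hard-bootstrapping loss in the spirit of Reed et al. (2014).

    logits:        (batch, C) raw classifier scores
    noisy_targets: (batch, C) one-hot labels derived from web search (possibly wrong)
    beta:          weight on the observed noisy label vs. the model's own prediction
    The training target is a mixture of the noisy label and the one-hot argmax
    prediction z, as in Equation 2 below.
    """
    log_q = F.log_softmax(logits, dim=1)
    z = F.one_hot(log_q.argmax(dim=1), num_classes=logits.size(1)).float()
    mixed = beta * noisy_targets + (1.0 - beta) * z
    return -(mixed * log_q).sum(dim=1).mean()

# Toy usage: 4 images, 141 classes (140 actions + background).
logits = torch.randn(4, 141)
noisy = F.one_hot(torch.randint(0, 141, (4,)), num_classes=141).float()
print(bootstrapped_cross_entropy(logits, noisy, beta=0.3))
```

Mixing the observed web label with the model's own prediction in this way lets a model that has first been fit on trusted seeding data down-weight web labels that contradict its predictions, which is the intuition spelled out after Equation 2.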
Figure 4: Architecture for the action-effect prediction model with bootstrapping.

The cross-entropy objective is defined as follows:

L(t, q) = \sum_{i=1}^{C} t_i \log(q_i),    (1)

where q are the predicted class probabilities and C is the number of classes. To handle the noisy labels in the web-search data {x′, t′}, we adopt a bootstrapping objective following Reed's work (Reed et al., 2014):

L(t', q) = \sum_{i=1}^{C} [\beta t'_i + (1 - \beta) z_i] \log(q_i),    (2)

where β ∈ [0, 1] is a model parameter to be assigned, and z is the one-hot vector of the prediction q, i.e., $z_i = 1$ if $i = \arg\max_k q_k$, k = 1 ... C. The model architecture is shown in Figure 4. After each training batch, the current model is used to make predictions q on images in the next batch, and the target probabilities are calculated as a linear combination of the current predictions q and the observed noisy labels t′. The idea behind this bootstrapping strategy is to ensure the consistency of the model's predictions. By first initializing the model on the seeding image data, the bootstrapping approach allows the model to put more trust in the web images that are consistent with the seeding data.

4.4 Evaluation

We evaluate the models on the action-effect prediction task. Given an image that illustrates a state of the world, the goal is to predict what action could cause that state. Given an action in the form of a verb-noun pair, the goal is to identify images that depict the most likely effects on the state of the world caused by that action. For each of the 140 verb-noun pairs, we use 10% of the human annotated images as the seeding image data for training, 30% for development, and the remaining 60% for test. The seeding image data set contains 408 images; on average, each verb-noun pair has fewer than 3 seeding images (including positive and negative images). The development set contains 1252 images and the test set contains 2503 images. The model parameters were selected based on the performance on the development set. As a given image may not be relevant to any effect, we add a background class to refer to images whose effects are not caused by any action in the space of actions. So the total number of classes for our evaluation model is 141. For each verb-noun pair and each of the effect phrases, around 40 images were downloaded from the Bing image search engine and used as candidate training examples. In total we have 6653 action web images and 59575 effect web images.

Methods for Comparison. All the methods compared are based on one neural network structure. We use ResNet (He et al., 2016) pre-trained on ImageNet (Deng et al., 2009) to extract image features. The extracted image features are fed to a fully connected layer with rectified linear units and then to a softmax layer to make predictions. More specifically, we compare the following configurations: (1) BS+Seed+Act+Eff. The bootstrapping approach trained on the seeding images, the action web images, and the effect web images. During the training stage, the model was first trained on the seeding image data using the vanilla cross-entropy objective (Equation 1). Then it was further trained on a combination of the seeding image data and web-search data using the bootstrapping objective (Equation 2). In the experiments we set β = 0.3. (2) BS+Seed+Act.
The bootstrapping approach trained in the same fashion as (1). The only difference is that this method does not use the effect web images. (3) Seed+Act+Eff. A baseline method trained on a combination of the seeding images, the web action images, and the web effect images, using the vanilla cross-entropy objective. (4) Seed+Act. A baseline method trained on a combination of the seeding images and the action web images, using the vanilla cross-entropy objective. (5) Seed. A baseline method that was only trained on the seeding image data, using the vanilla cross-entropy objective.

Figure 5: Several example test images and their predicted actions and predicted effect descriptions. The actions in bold are ground-truth labels.

Model | MAP | Top 1 | Top 5 | Top 20
BS+Seed+Act+Eff | 0.290 | 0.414 | 0.750 | 0.921
BS+Seed+Act | 0.252 | 0.414 | 0.721 | 0.893
Seed+Act+Eff | 0.247 | 0.314 | 0.679 | 0.886
Seed+Act | 0.241 | 0.371 | 0.650 | 0.814
Seed | 0.182 | 0.329 | 0.629 | 0.807
Table 3: Results for the action-effect prediction task (given an action, rank all the candidate images).

Model | MAP | Top 1 | Top 5 | Top 20
BS+Seed+Act+Eff | 0.660 | 0.523 | 0.843 | 0.954
BS+Seed+Act | 0.642 | 0.508 | 0.802 | 0.924
Seed+Act+Eff | 0.289 | 0.176 | 0.398 | 0.625
Seed+Act | 0.481 | 0.301 | 0.724 | 0.926
Seed | 0.634 | 0.520 | 0.765 | 0.892
Table 4: Results for the action-effect prediction task (given an image, rank all the actions).

Evaluation Results. We apply the trained classification model to all of the test images. Based on the matrix of prediction scores, we can evaluate action-effect prediction from two angles: (1) given an action class, rank all the candidate images; (2) given an image, rank all the candidate action classes. Tables 3 and 4 show the results for these two angles respectively. We report both mean average precision (MAP) and top prediction accuracy. Overall, BS+Seed+Act+Eff gives the best performance. Comparing the bootstrapping approaches with the baseline approaches (i.e., BS+Seed+Act+Eff vs. Seed+Act+Eff, and BS+Seed+Act vs. Seed+Act), the bootstrapping approaches clearly outperform their counterparts, demonstrating their ability to handle noisy web data. Comparing BS+Seed+Act+Eff with BS+Seed+Act, we can see that BS+Seed+Act+Eff performs better. This indicates that the use of effect descriptions can bring in more relevant images to train better models for action-effect prediction. In Table 4, the poor performance of Seed+Act+Eff and Seed+Act shows that it is risky to fully rely on the noisy web search results; these two methods had trouble distinguishing the background class from the rest. We further trained another multi-class classifier with web effect images, using their corresponding effect phrases as class labels.
Given a test image, we apply this new classifier to predict the effect descriptions of this image. Figure 5 shows some example images, their predicted actions based on our bootstrapping approach and their predicted effect phrases based on the new classifier. These examples also demonstrate another advantage of incorporating seed effect knowledge from language data: it provides state descriptions that can be used to better explain the perceived state. Such explanation can be crucial in human-agent communication for action planning and reasoning. 5 Generalizing Effect Knowledge to New Verb-Noun Pairs In real applications, it is very likely that we do not have the effect knowledge (i.e., language effect descriptions) for every verb-noun pair. And annotat941 Action Effect slice apple into many small pieces LSTM LSTM Cosine Embedding Loss Figure 6: Architecture of the action-effect embedding model. ing effect knowledge using language (as shown in Section 3) can be very expensive. In this section, we describe how to potentially generalize seed effect knowledge to new verb-noun pairs through an embedding model. 5.1 Action-Effect Embedding Model The structure of our model is shown in Figure 6. It is composed of two sub-networks: one for verbnoun pairs (i.e., action) and the other one for effect phrases (i.e, effect). The action or effect is fed into an LSTM encoder and then to two fully-connected layers. The output is an action embedding vc and effect embedding ve. The networks are trained by minimizing the following cosine embedding loss function: L(vc, ve) = ( 1 −s(vc, ve), if (c, e) ∈T max(0, s(vc, ve)), if (c, e) /∈T s(·, ·) is the cosine similarity between vectors. T is a collection of action-effect pairs. Suppose c is an input for action and e is an input for effect, this loss function will learn an action and effect semantic space that maximizes the similarities between c and e if they have an action-effect relation (i.e., (c, e) ∈T). During training, the negative actioneffect pairs (i.e., (c, e) /∈T) are randomly sampled from data. In the experiments, the negative sampling ratio is set to 25. That is, for each positive action-effect pair, 25 negative pairs are created through random sampling. At the inference step, given an unseen verbnoun pair, we embed it into the action and effect semantic space. Its embedding vector will be used to calculate similarities with all the embedding vectors of the candidate effect phrases. MAP Top 1 Top 5 BS+Seed+Act+Eff 0.529 0.643 0.928 BS+Seed+Act+pEff 0.507 0.642 0.893 BS+Seed+Act 0.435 0.643 0.964 Seed 0.369 0.678 0.786 Table 5: Results for the action-effect prediction task (given an action, rank all the candidate images). MAP Top 1 Top 5 BS+Seed+Act+Eff 0.733 0.574 0.947 BS+Seed+Act+pEff 0.729 0.551 0.961 BS+Seed+Act 0.724 0.557 0.933 Seed 0.705 0.557 0.898 Table 6: Results for the action-effect prediction task (given an image, rank all the actions). 5.2 Evaluation We divided the 140 verb-noun pairs into 70% training set (98 verb-noun pairs), 10% development set (14) and 20% test set (28). For the actioneffect embedding model, we use pre-trained GloVe word embeddings (Pennington et al., 2014) as input to the LSTM. The embedding model was trained using the language effect data corresponding to the training verb-noun pairs, and then it was applied to predict effect phrases for the unseen verb-noun pairs in the test set. For each unseen verb-noun pair, we collected its top five predicted effect phrases. 
Each predicted effect phrase was then used as query keywords to download web effect images. This set of web images are referred to as pEff and will be used in training the actioneffect prediction model. For each of the 28 test (i.e., new) verb-noun pairs, we use the same ratio 10% (about 3 examples) of the human annotated images as the seeding images, which were combined with downloaded web images to train the prediction model. The remaining 30% and 60% are used as the development set, and the test set. We compare the following different configurations: (1) BS+Seed+Act+pEff. The bootstrapping approach trained on the seeding images, the action web images, and the web images downloaded using the predicted effect phrases. (2) BS+Seed+Act+Eff. The bootstrapping approach trained on the seeding images, the action web images, and the effect web images (downloaded using ground-truth effect phrases). (3) BS+Seed+Act. The bootstrapping approach trained on the seeding images and the action web 942 Action Text Predicted Effect Text chop carrot carrot into sandwiches, carrot is sliced, carrot is cut thinly, carrot into different pieces, carrot is divided ignite paper paper is being charred , paper is being burned, paper is set, paper is being destroyed, paper is lit mash potato potato into chunks, potato into sandwiches, potato into slices, potato is chewed, potato into smaller pieces Table 7: Example predicted effect phrases for new verb-noun pairs. Unseen verbs and nouns are shown in bold. images. (4) Seed. A baseline only trained on the seeding images. Table 5 and 6 show the results for the action-effect prediction task for unseen verbnoun pairs. From the results we can see that BS+Seed+Act+pEff achieves close performance compared with BS+Seed+Act+Eff, which uses human annotated effect phrases. Although in most cases, BS+Seed+Act+pEff outperforms the baseline, which seems to point to the possibility that semantic embedding space can be employed to extend effect knowledge to new verb-noun pairs. However, the current results are not conclusive partly due to the small testing set. More in-depth evaluation is needed in the future. Table 7 shows top predicted effect phrases for several new verb-noun pairs. After analyzing the action-effect prediction results we notice that generalizing the effect knowledge to a verb-noun pair that contains an unseen verb tends to be more difficult than generalizing to a verb-noun pair that contains an unseen noun. Among the 28 test verbnoun pairs, 12 of them contain unseen verbs and known nouns, 7 of them contain unseen nouns and known verbs. For the task of ranking images given an action, the mean average precision is 0.447 for the unseen verb cases and 0.584 for the unseen noun cases. Although not conclusive, this might indicate that, verbs tend to capture more information about the effect states of the world than nouns. 6 Discussion and Conclusion When robots operate in the physical world, they not only need to perceive the world, but also need to act to the world. They need to understand the current state, to map their goals to the world state, and to plan for actions that can lead to the goals. All of these point to the importance of the ability to understand causal relations between actions and the state of the world. To address this issue, this paper introduces a new task on action-effect prediction. 
Particularly, we focus on modeling the connection between an action (a verb-noun pair) and its effect as illustrated in an image and treat natural language effect descriptions as side knowledge to help acquiring web image data and bootstrap training. Our current model is very simple and performance is yet to be improved. We plan to apply more advanced approaches in the future, for example, attention models that jointly capture actions, image states, and effect descriptions. We also plan to incorporate action-effect prediction to humanrobot collaboration, for example, to bridge the gap of commonsense knowledge about the physical world between humans and robots. This paper presents an initial investigation on action-effect prediction. There are many challenges and unknowns, from problem formulation to knowledge representation; from learning and inference algorithms to methods and metrics for evaluations. Nevertheless, we hope this work can motivate more research in this area, enabling physical action-effect reasoning, towards agents which can perceive, act, and communicate with humans in the physical world. Acknowledgments This work was supported by the National Science Foundation (IIS-1617682) and the DARPA XAI program under a subcontract from UCLA (N66001-17-2-4029). The authors would like to thank the anonymous reviewers for their valuable comments and suggestions. References Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425–2433. 943 Ren´ee Baillargeon. 2004. Infants’ physical world. Current directions in psychological science, 13(3):89–94. Tamara L Berg, Alexander C Berg, and Jonathan Shih. 2010. Automatic attribute discovery and characterization from noisy web data. In European Conference on Computer Vision, pages 663–676. Springer. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250. ACM. Yu-Wei Chao, Zhan Wang, Rada Mihalcea, and Jia Deng. 2015. Mining semantic affordances of visual object categories. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, pages 4259–4267. IEEE. Stephen V Cole, Matthew D Royal, Marco G Valtorta, Michael N Huhns, and John B Bowles. 2005. A lightweight tool for automatically extracting causal relationships from text. In SoutheastCon, 2006. Proceedings of the IEEE, pages 125–129. IEEE. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493–2537. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE. Quang Xuan Do, Yee Seng Chan, and Dan Roth. 2011. Minimally supervised event causality identification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 294–303. Association for Computational Linguistics. Mark Dredze, Paul McNamee, Delip Rao, Adam Gerber, and Tim Finin. 2010. Entity disambiguation for knowledge base population. 
In Proceedings of the 23rd International Conference on Computational Linguistics, pages 277–285. Association for Computational Linguistics. Curt J Ducasse. 1926. On the nature and the observability of the causal relation. The Journal of Philosophy, 23(3):57–68. Alireza Fathi and James M Rehg. 2013. Modeling actions through state changes. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 2579–2586. IEEE. Robert Fergus, Li Fei-Fei, Pietro Perona, and Andrew Zisserman. 2005. Learning object categories from google’s image search. In Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, volume 2, pages 1816–1823. IEEE. Amy Fire and Song-Chun Zhu. 2016. Learning perceptual causality from video. ACM Transactions on Intelligent Systems and Technology (TIST), 7(2):23. Maxwell Forbes and Yejin Choi. 2017. Verb physics: Relative physical knowledge of actions and objects. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 266–276. Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847. Qiaozi Gao, Malcolm Doering, Shaohua Yang, and Joyce Y Chai. 2016. Physical causality of action verbs in grounded language understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), volume 1, pages 1814–1824. Yoav Goldberg and Jon Orwant. 2013. A dataset of syntactic-ngrams over time from a very large corpus of english books. In Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, volume 1, pages 241–247. Alison Gopnik, Laura Schulz, and Laura Elizabeth Schulz. 2007. Causal learning: Psychology, philosophy, and computation. Oxford University Press. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778. Lyndon S Kennedy, Shih-Fu Chang, and Igor V Kozintsev. 2006. To search or to label?: predicting the performance of search-based automatic image classifiers. In Proceedings of the 8th ACM international workshop on Multimedia information retrieval, pages 249–258. ACM. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, S¨oren Auer, et al. 2015. Dbpedia–a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web, 6(2):167–195. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer. 944 Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image coattention for visual question answering. In Advances In Neural Information Processing Systems, pages 289–297. Dipendra Misra, John Langford, and Yoav Artzi. 2017. Mapping instructions and visual observations to actions with reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1015–1026. Association for Computational Linguistics. 
Dipendra Kumar Misra, Kejia Tao, Percy Liang, and Ashutosh Saxena. 2015. Environment-driven lexicon induction for high-level instructions. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 992–1002. Tanmoy Mukherjee and Timothy Hospedales. 2016. Gaussian visual-linguistic embedding for zero-shot recognition. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 912–918. Mayu Otani, Yuta Nakashima, Esa Rahtu, Janne Heikkil¨a, and Naokazu Yokoya. 2016. Learning joint representations of videos and sentences with web image search. In European Conference on Computer Vision, pages 651–667. Springer. Judea Pearl et al. 2009. Causal inference in statistics: An overview. Statistics surveys, 3:96–146. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Nazneen Fatema Rajani and Raymond J Mooney. 2016. Combining supervised and unsupervised ensembles for knowledge base population. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP-16). Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. 2014. Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596. Chuck Rosenberg, Martial Hebert, and Henry Schneiderman. 2005. Semi-supervised self-training of object detection models. In Application of Computer Vision, 2005. WACV/MOTIONS’05 Volume 1. Seventh IEEE Workshops on, volume 1, pages 29–36. IEEE. Deb Roy. 2005. Grounding words in perception and action: computational insights. Trends in cognitive sciences, 9(8):389–396. Rebecca Sharp, Mihai Surdeanu, Peter Jansen, Peter Clark, and Michael Hammond. 2016. Creating causal embeddings for question answering with minimal supervision. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 138–148. Lanbo She and Joyce Chai. 2016. Incremental acquisition of verb hypothesis space towards physical world interaction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 108–117. Lanbo She and Joyce Chai. 2017. Interactive learning of grounded verb semantics towards human-robot communication. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1634–1644. Lanbo She, Shaohua Yang, Yu Cheng, Yunyi Jia, Joyce Chai, and Ning Xi. 2014. Back to the blocks world: Learning new actions through situated human-robot dialogue. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 89–97. Robert Speer and Catherine Havasi. 2012. Representing general relational knowledge in conceptnet 5. In LREC, pages 3679–3686. Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web, pages 697–706. ACM. Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R Walter, Ashis Gopal Banerjee, Seth J Teller, and Nicholas Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. 
In AAAI, volume 1, page 2. Yao-Hung Hubert Tsai and Ruslan Salakhutdinov. 2017. Improving one-shot learning through fusing side information. arXiv preprint arXiv:1710.08347. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, pages 3156–3164. IEEE. Xiaolong Wang, Ali Farhadi, and Abhinav Gupta. 2016. Actions˜ transformations. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 2658–2667. Max Whitney and Anoop Sarkar. 2012. Bootstrapping via graph propagation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 620–628. Association for Computational Linguistics. 945 Jiajun Wu, Joseph J Lim, Hongyi Zhang, Joshua B Tenenbaum, and William T Freeman. 2016. Physics 101: Learning physical object properties from unlabeled videos. In BMVC, volume 2, page 7. Jiajun Wu, Erika Lu, Pushmeet Kohli, Bill Freeman, and Josh Tenenbaum. 2017. Learning to see physics via visual de-animation. In Advances in Neural Information Processing Systems, pages 152–163. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057. Xun Xu, Timothy Hospedales, and Shaogang Gong. 2017a. Transductive zero-shot action recognition by word-vector embedding. International Journal of Computer Vision, 123(3):309–333. Xun Xu, Timothy M Hospedales, and Shaogang Gong. 2016. Multi-task zero-shot action recognition with prioritised data augmentation. In European Conference on Computer Vision, pages 343–359. Springer. Zhongwen Xu, Linchao Zhu, and Yi Yang. 2017b. Few-shot object recognition from machine-labeled web images. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Shaohua Yang, Qiaozi Gao, Changsong Liu, Caiming Xiong, Song-Chun Zhu, and Joyce Y Chai. 2016. Grounded semantic role labeling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 149–159. Xuefeng Yang and Kezhi Mao. 2014. Multi level causal relation identification using extended features. Expert Systems with Applications, 41(16):7171–7181. Yezhou Yang, Cornelia Ferm¨uller, and Yiannis Aloimonos. 2013. Detection of manipulation action consequences (mac). In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 2563–2570. IEEE. Mark Yatskar, Vicente Ordonez, and Ali Farhadi. 2016. Stating the obvious: Extracting visual common sense knowledge. In Proceedings of NAACL-HLT, pages 193–198. Rowan Zellers and Yejin Choi. 2017. Zero-shot activity recognition with verb attribute induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP). Yipin Zhou and Tamara L Berg. 2016. Learning temporal transformations from time-lapse videos. In European Conference on Computer Vision, pages 262–277. Springer.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 946–956 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 946 Transformation Networks for Target-Oriented Sentiment Classification∗ Xin Li1, Lidong Bing2, Wai Lam1 and Bei Shi1 1Department of Systems Engineering and Engineering Management The Chinese University of Hong Kong, Hong Kong 2Tencent AI Lab, Shenzhen, China {lixin,wlam,bshi}@se.cuhk.edu.hk [email protected] Abstract Target-oriented sentiment classification aims at classifying sentiment polarities over individual opinion targets in a sentence. RNN with attention seems a good fit for the characteristics of this task, and indeed it achieves the state-of-the-art performance. After re-examining the drawbacks of attention mechanism and the obstacles that block CNN to perform well in this classification task, we propose a new model to overcome these issues. Instead of attention, our model employs a CNN layer to extract salient features from the transformed word representations originated from a bi-directional RNN layer. Between the two layers, we propose a component to generate target-specific representations of words in the sentence, meanwhile incorporate a mechanism for preserving the original contextual information from the RNN layer. Experiments show that our model achieves a new state-of-the-art performance on a few benchmarks.1 1 Introduction Target-oriented (also mentioned as “target-level” or “aspect-level” in some works) sentiment classification aims to determine sentiment polarities over “opinion targets” that explicitly appear in the sentences (Liu, 2012). For example, in the sentence “I am pleased with the fast log on, and the long battery life”, the user mentions two targets ∗The work was done when Xin Li was an intern at Tencent AI Lab. This project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414). 1Our code is open-source and available at https:// github.com/lixin4ever/TNet “log on” and “better life”, and expresses positive sentiments over them. The task is usually formulated as predicting a sentiment category for a (target, sentence) pair. Recurrent Neural Networks (RNNs) with attention mechanism, firstly proposed in machine translation (Bahdanau et al., 2014), is the most commonly-used technique for this task. For example, Wang et al. (2016); Tang et al. (2016b); Yang et al. (2017); Liu and Zhang (2017); Ma et al. (2017) and Chen et al. (2017) employ attention to measure the semantic relatedness between each context word and the target, and then use the induced attention scores to aggregate contextual features for prediction. In these works, the attention weight based combination of word-level features for classification may introduce noise and downgrade the prediction accuracy. For example, in “This dish is my favorite and I always get it and never get tired of it.”, these approaches tend to involve irrelevant words such as “never” and “tired” when they highlight the opinion modifier “favorite”. To some extent, this drawback is rooted in the attention mechanism, as also observed in machine translation (Luong et al., 2015) and image captioning (Xu et al., 2015). Another observation is that the sentiment of a target is usually determined by key phrases such as “is my favorite”. 
By this token, Convolutional Neural Networks (CNNs)—whose capability for extracting the informative n-gram features (also called “active local features”) as sentence representations has been verified in (Kim, 2014; Johnson and Zhang, 2015)— should be a suitable model for this classification problem. However, CNN likely fails in cases where a sentence expresses different sentiments over multiple targets, such as “great food but the service was dreadful!”. One reason is that CNN cannot fully explore the target information as done by RNN-based meth947 ods (Tang et al., 2016a).2 Moreover, it is hard for vanilla CNN to differentiate opinion words of multiple targets. Precisely, multiple active local features holding different sentiments (e.g., “great food” and “service was dreadful”) may be captured for a single target, thus it will hinder the prediction. We propose a new architecture, named TargetSpecific Transformation Networks (TNet), to solve the above issues in the task of target sentiment classification. TNet firstly encodes the context information into word embeddings and generates the contextualized word representations with LSTMs. To integrate the target information into the word representations, TNet introduces a novel Target-Specific Transformation (TST) component for generating the target-specific word representations. Contrary to the previous attention-based approaches which apply the same target representation to determine the attention scores of individual context words, TST firstly generates different representations of the target conditioned on individual context words, then it consolidates each context word with its tailor-made target representation to obtain the transformed word representation. Considering the context word “long” and the target “battery life” in the above example, TST firstly measures the associations between “long” and individual target words. Then it uses the association scores to generate the target representation conditioned on “long”. After that, TST transforms the representation of “long” into its target-specific version with the new target representation. Note that “long” could also indicate a negative sentiment (say for “startup time”), and the above TST is able to differentiate them. As the context information carried by the representations from the LSTM layer will be lost after the non-linear TST, we design a contextpreserving mechanism to contextualize the generated target-specific word representations. Such mechanism also allows deep transformation structure to learn abstract features3. To help the CNN feature extractor locate sentiment indicators more accurately, we adopt a proximity strategy to scale the input of convolutional layer with positional relevance between a word and the target. 2One method could be concatenating the target representation with each word representation, but the effect as shown in (Wang et al., 2016) is limited. 3Abstract features usually refer to the features ultimately useful for the task (Bengio et al., 2013; LeCun et al., 2015). In summary, our contributions are as follows: • TNet adapts CNN to handle target-level sentiment classification, and its performance dominates the state-of-the-art models on benchmark datasets. • A novel Target-Specific Transformation component is proposed to better integrate target information into the word representations. 
• A context-preserving mechanism is designed to forward the context information into a deep transformation architecture, thus, the model can learn more abstract contextualized word features from deeper networks. 2 Model Description Given a target-sentence pair (wτ, w), where wτ = {wτ 1, wτ 2, ..., wτ m} is a sub-sequence of w = {w1, w2, ..., wn}, and the corresponding word embeddings xτ = {xτ 1, xτ 2, ..., xτ m} and x = {x1, x2, ..., xn}, the aim of target sentiment classification is to predict the sentiment polarity y ∈ {P, N, O} of the sentence w over the target wτ, where P, N and O denote “positive”, “negative” and “neutral” sentiments respectively. The architecture of the proposed TargetSpecific Transformation Networks (TNet) is shown in Fig. 1. The bottom layer is a BiLSTM which transforms the input x = {x1, x2, ..., xn} ∈ Rn×dimw into the contextualized word representations h(0) = {h(0) 1 , h(0) 2 , ..., h(0) n } ∈Rn×2dimh (i.e. hidden states of BiLSTM), where dimw and dimh denote the dimensions of the word embeddings and the hidden representations respectively. The middle part, the core part of our TNet, consists of L Context-Preserving Transformation (CPT) layers. The CPT layer incorporates the target information into the word representations via a novel Target-Specific Transformation (TST) component. CPT also contains a contextpreserving mechanism, resembling identity mapping (He et al., 2016a,b) and highway connection (Srivastava et al., 2015a,b), allows preserving the context information and learning more abstract word-level features using a deep network. The top most part is a position-aware convolutional layer which first encodes positional relevance between a word and a target, and then extracts informative features for classification. 2.1 Bi-directional LSTM Layer As observed in Lai et al. (2015), combining contextual information with word embeddings is an 948 Figure 1: Architecture of TNet. effective way to represent a word in convolutionbased architectures. TNet also employs a BiLSTM to accumulate the context information for each word of the input sentence, i.e., the bottom part in Fig. 1. For simplicity and space issue, we denote the operation of an LSTM unit on xi as LSTM(xi). Thus, the contextualized word representation h(0) i ∈R2dimh is obtained as follows: h(0) i = [−−−−→ LSTM(xi); ←−−−− LSTM(xi)], i ∈[1, n]. (1) 2.2 Context-Preserving Transformation The above word-level representation has not considered the target information yet. Traditional attention-based approaches keep the word-level features static and aggregate them with weights as the final sentence representation. In contrast, as shown in the middle part in Fig. 1, we introduce multiple CPT layers and the detail of a single CPT is shown in Fig. 2. In each CPT layer, a tailor-made TST component that aims at better consolidating word representation and target representation is proposed. Moreover, we design a context-preserving mechanism enabling the learning of target-specific word representations in a deep neural architecture. 2.2.1 Target-Specific Transformation TST component is depicted with the TST block in Fig. 2. The first task of TST is to generate the representation of the target. Previous methods (Chen Figure 2: Details of a CPT module. et al., 2017; Liu and Zhang, 2017) average the embeddings of the target words as the target representation. This strategy may be inappropriate in some cases because different target words usually do not contribute equally. 
For example, in the target “amd turin processor”, the word “processor” is more important than “amd” and “turin”, because the sentiment is usually conveyed over the phrase head, i.e.,“processor”, but seldom over modifiers (such as brand name “amd”). Ma et al. (2017) attempted to overcome this issue by measuring the importance score between each target word representation and the averaged sentence vector. However, it may be ineffective for sentences expressing multiple sentiments (e.g., “Air has higher resolution but the fonts are small.”), because taking the average tends to neutralize different sentiments. We propose to dynamically compute the importance of target words based on each sentence word rather than the whole sentence. We first employ another BiLSTM to obtain the target word representations hτ ∈Rm×2dimh: hτ j = [−−−−→ LSTM(xτ j ); ←−−−− LSTM(xτ j )], j ∈[1, m]. (2) Then, we dynamically associate them with each word wi in the sentence to tailor-make target representation rτ i at the time step i: rτ i = m X j=1 hτ j ∗F(h(l) i , hτ j ), (3) where the function F measures the relatedness between the j-th target word representation hτ j and 949 the i-th word-level representation h(l) i : F(h(l) i , hτ j ) = exp (h(l)⊤ i hτ j ) Pm k=1 exp (h(l)⊤ i hτ k) . (4) Finally, the concatenation of rτ i and h(l) i is fed into a fully-connected layer to obtain the i-th targetspecific word representation ˜hi (l): ˜h(l) i = g(W τ[h(l) i : rτ i ] + bτ), (5) where g(∗) is a non-linear activation function and “:” denotes vector concatenation. W τ and bτ are the weights of the layer. 2.2.2 Context-Preserving Mechanism After the non-linear TST (see Eq. 5), the context information captured with contextualized representations from the BiLSTM layer will be lost since the mean and the variance of the features within the feature vector will be changed. To take advantage of the context information, which has been proved to be useful in (Lai et al., 2015), we investigate two strategies: Lossless Forwarding (LF) and Adaptive Scaling (AS), to pass the context information to each following layer, as depicted by the block “LF/AS” in Fig. 2. Accordingly, the model variants are named TNet-LF and TNet-AS. Lossless Forwarding. This strategy preserves context information by directly feeding the features before the transformation to the next layer. Specifically, the input h(l+1) i of the (l + 1)-th CPT layer is formulated as: h(l+1) i = h(l) i + ˜h(l) i , i ∈[1, n], l ∈[0, L], (6) where h(l) i is the input of the l-th layer and ˜h(l) i is the output of TST in this layer. We unfold the recursive form of Eq. 6 as follows: h(l+1) i = h(0) i +TST(h(0) i )+· · ·+TST(h(l) i ). (7) Here, we denote ˜h(l) i as TST(h(l) i ). From Eq. 7, we can see that the output of each layer will contain the contextualized word representations (i.e., h(0) i ), thus, the context information is encoded into the transformed features. We call this strategy “Lossless Forwarding” because the contextualized representations and the transformed representations (i.e., TST(h(l) i )) are kept unchanged during the feature combination. Adaptive Scaling. Lossless Forwarding introduces the context information by directly adding back the contextualized features to the transformed features, which raises a question: Can the weights of the input and the transformed features be adjusted dynamically? With this motivation, we propose another strategy, named “Adaptive Scaling”. 
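Before detailing Adaptive Scaling, the following sketch in NumPy illustrates one CPT layer as described so far: the Target-Specific Transformation of Eqs. 3–5 followed by Lossless Forwarding (Eq. 6). The array shapes, the choice of tanh for the unspecified activation g(*), and all variable names are illustrative assumptions, not the authors' implementation.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cpt_layer_lf(H, H_tau, W_tau, b_tau):
    # One CPT layer with Lossless Forwarding.
    # H:     (n, 2*dim_h) contextualized word states h_i^(l)
    # H_tau: (m, 2*dim_h) target word states h_j^tau from the target BiLSTM
    # W_tau: (2*dim_h, 4*dim_h), b_tau: (2*dim_h,)  -- assumed shapes
    H_next = np.zeros_like(H)
    for i in range(H.shape[0]):
        rel = softmax(H_tau @ H[i])        # Eq. 4: relatedness F(h_i^(l), h_j^tau)
        r_tau_i = rel @ H_tau              # Eq. 3: tailor-made target representation
        h_tilde = np.tanh(W_tau @ np.concatenate([H[i], r_tau_i]) + b_tau)  # Eq. 5, g assumed tanh
        H_next[i] = H[i] + h_tilde         # Eq. 6: Lossless Forwarding
    return H_next

Stacking L such layers reproduces the recursion in Eq. 7, since each layer adds its transformed features on top of the unchanged input.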
Similar to the gate mechanism in RNN variants (Jozefowicz et al., 2015), Adaptive Scaling introduces a gating function to control the passed proportions of the transformed features and the input features. The gate t(l) as follows: t(l) i = σ(Wtransh(l) i + btrans), (8) where t(l) i is the gate for the i-th input of the l-th CPT layer, and σ is the sigmoid activation function. Then we perform convex combination of h(l) i and ˜h(l) i based on the gate: h(l+1) i = t(l) i ⊙˜h(l) i + (1 −t(l) i ) ⊙h(l) i . (9) Here, ⊙denotes element-wise multiplication. The non-recursive form of this equation is as follows (for clarity, we ignore the subscripts): h(l+1) = [ lY k=0 (1 −t(k))] ⊙h(0) +[t(0) lY k=1 (1 −t(k))] ⊙TST(h(0)) + · · · +t(l−1)(1 −t(l)) ⊙TST(h(l−1)) + t(l) ⊙TST(h(l)). Thus, the context information is integrated in each upper layer and the proportions of the contextualized representations and the transformed representations are controlled by the computed gates in different transformation layers. 2.3 Convolutional Feature Extractor Recall that the second issue that blocks CNN to perform well is that vanilla CNN may associate a target with unrelated general opinion words which are frequently used as modifiers for different targets across domains. For example, “service” in “Great food but the service is dreadful” may be associated with both “great” and “dreadful”. To solve it, we adopt a proximity strategy, which is observed effective in (Chen et al., 2017; Li and Lam, 2017). The idea is a closer opinion word is more likely to be the actual modifier of the target. 950 # Positive # Negative # Neutral LAPTOP Train 980 858 454 Test 340 128 171 REST Train 2159 800 632 Test 730 195 196 TWITTER Train 1567 1563 3127 Test 174 174 346 Table 1: Statistics of datasets. Specifically, we first calculate the position relevance vi between the i-th word and the target4: vi =      1 −(k+m−i) C i < k + m 1 −i−k C k + m ≤i ≤n 0 i > n (10) where k is the index of the first target word, C is a pre-specified constant, and m is the length of the target wτ. Then, we use v to help CNN locate the correct opinion w.r.t. the given target: ˆh(l) i = h(l) i ∗vi, i ∈[1, n], l ∈[1, L]. (11) Based on Eq. 10 and Eq. 11, the words close to the target will be highlighted and those far away will be downgraded. v is also applied on the intermediate output to introduce the position information into each CPT layer. Then we feed the weighted h(L) to the convolutional layer, i.e., the top-most layer in Fig. 1, to generate the feature map c ∈Rn−s+1 as follows: ci = ReLU(w⊤ convh(L) i:i+s−1 + bconv), (12) where h(L) i:i+s−1 ∈Rs·dimh is the concatenated vector of ˆh(L) i , · · · , ˆh(L) i+s−1, and s is the kernel size. wconv ∈Rs·dimh and bconv ∈R are learnable weights of the convolutional kernel. To capture the most informative features, we apply max pooling (Kim, 2014) and obtain the sentence representation z ∈Rnk by employing nk kernels: z = [max(c1), · · · , max(cnk)]⊤. (13) Finally, we pass z to a fully connected layer for sentiment prediction: p(y|wτ, w) = Softmax(Wfz + bf). (14) where Wf and bf are learnable parameters. 4As we perform sentence padding, it is possible that the index i is larger than the actual length n of the sentence. 3 Experiments 3.1 Experimental Setup As shown in Table 1, we evaluate the proposed TNet on three benchmark datasets: LAPTOP and REST are from SemEval ABSA challenge (Pontiki et al., 2014), containing user reviews in laptop domain and restaurant domain respectively. 
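As a brief aside before the remaining preprocessing details, the proximity weighting of Eq. 10 used by the convolutional feature extractor can be written as a short function. The sketch below assumes 1-based word positions and clips negative values to zero; both choices are our reading of Eq. 10 rather than stated details.

def position_relevance(n_padded, n, k, m, C):
    # v_i of Eq. 10: k = index of the first target word, m = target length,
    # n = true sentence length, n_padded = padded length, C = pre-specified constant (e.g., 40.0 in Table 2).
    v = []
    for i in range(1, n_padded + 1):
        if i > n:                        # padded positions get zero weight
            vi = 0.0
        elif i < k + m:                  # words up to the end of the target span
            vi = 1.0 - (k + m - i) / C
        else:                            # words after the target
            vi = 1.0 - (i - k) / C
        v.append(max(0.0, vi))           # clipping to zero is an added assumption
    return v

The returned weights are multiplied element-wise into h_i^(L) as in Eq. 11 before the convolution and max pooling of Eqs. 12–13.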
We also remove a few examples having the “conflict label” as done in (Chen et al., 2017); TWITTER is built by Dong et al. (2014), containing twitter posts. All tokens are lowercased without removal of stop words, symbols or digits, and sentences are zero-padded to the length of the longest sentence in the dataset. Evaluation metrics are Accuracy and Macro-Averaged F1 where the latter is more appropriate for datasets with unbalanced classes. We also conduct pairwise t-test on both Accuracy and Macro-Averaged F1 to verify if the improvements over the compared models are reliable. TNet is compared with the following methods. • SVM (Kiritchenko et al., 2014): It is a traditional support vector machine based model with extensive feature engineering; • AdaRNN (Dong et al., 2014): It learns the sentence representation toward target for sentiment prediction via semantic composition over dependency tree; • AE-LSTM, and ATAE-LSTM (Wang et al., 2016): AE-LSTM is a simple LSTM model incorporating the target embedding as input, while ATAE-LSTM extends AE-LSTM with attention; • IAN (Ma et al., 2017): IAN employs two LSTMs to learn the representations of the context and the target phrase interactively; • CNN-ASP: It is a CNN-based model implemented by us which directly concatenates target representation to each word embedding; • TD-LSTM (Tang et al., 2016a): It employs two LSTMs to model the left and right contexts of the target separately, then performs predictions based on concatenated context representations; • MemNet (Tang et al., 2016b): It applies attention mechanism over the word embeddings multiple times and predicts sentiments 951 Hyper-params TNet-LF TNet-AS LAPTOP REST TWITTER LAPTOP REST TWITTER dimw 300 300 dimh 50 50 dropout rates (plstm, psent) (0.3, 0.3) (0.3, 0.3) L 2 2 batch size 64 25 64 64 32 64 s 3 3 nk 50 100 C 40.0 30.0 Table 2: Settings of hyper-parameters. based on the top-most sentence representations; • BILSTM-ATT-G (Liu and Zhang, 2017): It models left and right contexts using two attention-based LSTMs and introduces gates to measure the importance of left context, right context, and the entire sentence for the prediction; • RAM (Chen et al., 2017): RAM is a multilayer architecture where each layer consists of attention-based aggregation of word features and a GRU cell to learn the sentence representation. We run the released codes of TD-LSTM and BILSTM-ATT-G to generate results, since their papers only reported results on TWITTER. We also rerun MemNet on our datasets and evaluate it with both accuracy and Macro-Averaged F1.5 We use pre-trained GloVe vectors (Pennington et al., 2014) to initialize the word embeddings and the dimension is 300 (i.e., dimw = 300). For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution U(−0.25, 0.25), as done in (Kim, 2014). We only use one convolutional kernel size because it was observed that CNN with single optimal kernel size is comparable with CNN having multiple kernel sizes on small datasets (Zhang and Wallace, 2017). To alleviate overfitting, we apply dropout on the input word embeddings of the LSTM and the ultimate sentence representation z. All weight matrices are initialized with the uniform distribution U(−0.01, 0.01) and the biases are initialized 5The codes of TD-LSTM/MemNet and BILSTM-ATTG are available at: http://ir.hit.edu.cn/˜dytang and http://leoncrashcode.github.io. Note that MemNet was only evaluated with accuracy. as zeros. 
The training objective is cross-entropy, and Adam (Kingma and Ba, 2015) is adopted as the optimizer by following the learning rate and the decay rates in the original paper. The hyper-parameters of TNet-LF and TNetAS are listed in Table 2. Specifically, all hyperparameters are tuned on 20% randomly held-out training data and the hyper-parameter collection producing the highest accuracy score is used for testing. Our model has comparable number of parameters compared to traditional LSTM-based models as we reuse parameters in the transformation layers and BiLSTM.6 3.2 Main Results As shown in Table 3, both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model. Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER. The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical sentences. Indeed, we can also observe that another CNN-based baseline, i.e., CNNASP implemented by us, also obtains good results on TWITTER. On the other hand, the performance of those comparison methods is mostly unstable. For the tweet in TWITTER, the competitive BILSTMATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their ca6All experiments are conducted on a single NVIDIA GTX 1080. The prediction cost of a sentence is about 2 ms. 952 Models LAPTOP REST TWITTER ACC Macro-F1 ACC Macro-F1 ACC Macro-F1 Baselines SVM 70.49♮ 80.16♮ 63.40∗ 63.30∗ AdaRNN 66.30♮ 65.90♮ AE-LSTM 68.90♮ 76.60♮ ATAE-LSTM 68.70♮ 77.20♮ IAN 72.10♮ 78.60♮ CNN-ASP 72.46 65.31 77.82 65.11 73.27 71.77 TD-LSTM 71.83 68.43 78.00 66.73 66.62 64.01 MemNet 70.33 64.09 78.16 65.83 68.50 66.91 BILSTM-ATT-G 74.37 69.90 80.38 70.78 72.70 70.84 RAM 75.01 70.51 79.79 68.86 71.88 70.33 CPT Alternatives LSTM-ATT-CNN 73.37 68.03 78.95 68.71 70.09 67.68 LSTM-FC-CNN-LF 75.59 70.60 80.41 70.23 73.70 72.82 LSTM-FC-CNN-AS 75.78 70.72 80.23 70.06 74.28 72.60 Ablated TNet TNet w/o transformation 73.30 68.25 78.90 65.86 72.10 70.57 TNet w/o context 73.91 68.87 80.07 69.01 74.51 73.05 TNet-LF w/o position 75.13 70.63 79.86 69.69 73.83 72.49 TNet-AS w/o position 75.27 70.03 79.79 69.78 73.84 72.47 TNet variants TNet-LF 76.01†,‡ 71.47†,‡ 80.79†,‡ 70.84‡ 74.68†,‡ 73.36†,‡ TNet-AS 76.54†,‡ 71.75†,‡ 80.69†,‡ 71.27†,‡ 74.97†,‡ 73.60†,‡ Table 3: Experimental results (%). The results with symbol“♮” are retrieved from the original papers, and those starred (∗) one are from Dong et al. (2014). The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM. pability in capturing the context features. Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be errorprone, which will affect those methods such as AdaRNN using dependency information. 
From the above observations and analysis, some takeaway message for the task of target sentiment classification could be: • LSTM-based models relying on sequential information can perform well for formal sentences by capturing more useful context features; • For ungrammatical text, CNN-based models may have some advantages because CNN aims to extract the most informative n-gram features and is thus less sensitive to informal texts without strong sequential patterns. 3.3 Performance of Ablated TNet To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3). After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNetAS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet. It shows that the integration of target information into the word-level representations is crucial for good performance. Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST7, while on TWITTER, TNet w/o context performs very competitive (p-values with TNetLF and TNet-AS are 0.066 and 0.053 respectively for Accuracy). Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the contextpreserving component becomes less important for such data. TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving. As for the position information, we conduct statistical t-test between TNet-LF/AS and TNetLF/AS w/o position together with performance comparison. All of the produced p-values are less than 0.05, suggesting that the improvements brought in by position information are significant. 7Without specification, the significance level is set to 0.05. 953 3.4 CPT versus Alternatives The next interesting question is what if we replace the transformation module (i.e., the CPT layers in Fig.1) of TNet with other commonly-used components? We investigate two alternatives: attention mechanism and fully-connected (FC) layer, resulting in three pipelines as shown in the second group of Table 3 (position relevance is kept for them). LSTM-ATT-CNN applies attention as the alternative8, and it does not need the contextpreserving mechanism. It performs unexceptionally worse than the TNet variants. We are surprised that LSTM-ATT-CNN is even worse than TNet w/o transformation (a pipeline simply removing the transformation module) on TWITTER. More concretely, applying attention results in negative effect on TWITTER, which is consistent with the observation that all those attention-based state-of-the-art methods (i.e., TD-LSTM, MemNet, BILSTM-ATT-G, and RAM) cannot perform well on TWITTER. LSTM-FC-CNN-LF and LSTM-FC-CNN-AS are built by applying FC layer to replace TST and keeping the context-preserving mechanism (i.e., LF and AS). Specifically, the concatenation of word representation and the averaged target vector is fed to the FC layer to obtain targetspecific features. Note that LSTM-FC-CNNLF/AS are equivalent to TNet-LF/AS when processing single-word targets (see Eq. 3). 
They obtain competitive results on all datasets: comparable with or better than the state-of-the-art methods. The TNet variants can still outperform LSTMFC-CNN-LF/AS with significant gaps, e.g., on LAPTOP and REST, the accuracy gaps between TNet-LF and LSTM-FC-CNN-LF are 0.42% (p < 0.03) and 0.38% (p < 0.04) respectively. 3.5 Impact of CPT Layer Number As our TNet involves multiple CPT layers, we investigate the effect of the layer number L. Specifically, we conduct experiments on the held-out training data of LAPTOP and vary L from 2 to 10, increased by 2. The cases L=1 and L=15 are also included. The results are illustrated in Figure 3. We can see that both TNet-LF and TNetAS achieve the best results when L=2. While increasing L, the performance is basically becoming worse. For large L, the performance of TNet-AS 8We tried different attention mechanisms and report the best one here, namely, dot attention (Luong et al., 2015). 1 2 4 6 8 10 15 55 60 65 70 75 Accuracy (%) TNet-LF TNet-AS 1 2 4 6 8 10 15 55 60 65 70 75 Macro-F1 (%) TNet-LF TNet-AS Figure 3: Effect of L. generally becomes more sensitive, it is probably because AS involves extra parameters (see Eq 9) that increase the training difficulty. 3.6 Case Study Table 4 shows some sample cases. The input targets are wrapped in the brackets with true labels given as subscripts. The notations P, N and O in the table represent positive, negative and neutral respectively. For each sentence, we underline the target with a particular color, and the text of its corresponding most informative n-gram feature9 captured by TNet-AS (TNet-LF captures very similar features) is in the same color (so color printing is preferred). For example, for the target “resolution” in the first sentence, the captured feature is “Air has higher”. Note that as discussed above, the CNN layer of TNet captures such features with the size-three kernels, so that the features are trigrams. Each of the last features of the second and seventh sentences contains a padding token, which is not shown. Our TNet variants can predict target sentiment more accurately than RAM and BILSTM-ATT-G in the transitional sentences such as the first sentence by capturing correct trigram features. For the third sentence, its second and third most informative trigrams are “100% . PAD” and “’ s not”, being used together with “features make up”, our models can make correct predictions. Moreover, TNet can still make correct prediction when the explicit opinion is target-specific. For example, 9For each convolutional filter, only one n-gram feature in the feature map will be kept after the max pooling. Among those from different filters, the n-gram with the highest frequency will be regarded as the most informative n-gram w.r.t. the given target. 954 Sentence BILSTM-ATT-G RAM TNet-LF TNet-AS 1. Air has higher [resolution]P but the [fonts]N are small . (N, N) (N, N) (P, N) (P, N) 2. Great [food]P but the [service]N is dreadful . (P, N) (P, N) (P, N) (P, N) 3. Sure it ’ s not light and slim but the [features]P make up for it 100% . N N P P 4. Not only did they have amazing , [sandwiches]P , [soup]P , [pizza]P etc , but their [homemade sorbets]P are out of this world ! (P, O, O, P) (P, P, O, P) (P, P, P, P) (P, P, P, P) 5. [startup times]N are incredibly long : over two minutes . P P N N 6. I am pleased with the fast [log on]P , speedy [wifi connection]P and the long [battery life]P ( > 6 hrs ) . (P, P, P) (P, P, P) (P, P, P) (P, P, P) 7. The [staff]N should be a bit more friendly . 
P P P P Table 4: Example predictions, color printing is preferred. The input targets are wrapped in brackets with the true labels given as subscripts.  indicates incorrect prediction. “long” in the fifth sentence is negative for “startup time”, while it could be positive for other targets such as “battery life” in the sixth sentence. The sentiment of target-specific opinion word is conditioned on the given target. Our TNet variants, armed with the word-level feature transformation w.r.t. the target, is capable of handling such case. We also find that all these models cannot give correct prediction for the last sentence, a commonly used subjunctive style. In this case, the difficulty of prediction does not come from the detection of explicit opinion words but the inference based on implicit semantics, which is still quite challenging for neural network models. 4 Related Work Apart from sentence level sentiment classification (Kim, 2014; Shi et al., 2018), aspect/target level sentiment classification is also an important research topic in the field of sentiment analysis. The early methods mostly adopted supervised learning approach with extensive hand-coded features (Blair-Goldensohn et al., 2008; Titov and McDonald, 2008; Yu et al., 2011; Jiang et al., 2011; Kiritchenko et al., 2014; Wagner et al., 2014; Vo and Zhang, 2015), and they fail to model the semantic relatedness between a target and its context which is critical for target sentiment analysis. Dong et al. (2014) incorporate the target information into the feature learning using dependency trees. As observed in previous works, the performance heavily relies on the quality of dependency parsing. Tang et al. (2016a) propose to split the context into two parts and associate target with contextual features separately. Similar to (Tang et al., 2016a), Zhang et al. (2016) develop a three-way gated neural network to model the interaction between the target and its surrounding contexts. Despite the advantages of jointly modeling target and context, they are not capable of capturing long-range information when some critical context information is far from the target. To overcome this limitation, researchers bring in the attention mechanism to model target-context association (Tang et al., 2016a,b; Wang et al., 2016; Yang et al., 2017; Liu and Zhang, 2017; Ma et al., 2017; Chen et al., 2017; Zhang et al., 2017; Tay et al., 2017). Compared with these methods, our TNet avoids using attention for feature extraction so as to alleviate the attended noise. 5 Conclusions We re-examine the drawbacks of attention mechanism for target sentiment classification, and also investigate the obstacles that hinder CNN-based models to perform well for this task. Our TNet model is carefully designed to solve these issues. Specifically, we propose target specific transformation component to better integrate target information into the word representation. Moreover, we employ CNN as the feature extractor for this classification problem, and rely on the contextpreserving and position relevance mechanisms to maintain the advantages of previous LSTM-based models. The performance of TNet consistently dominates previous state-of-the-art methods on different types of data. The ablation studies show the efficacy of its different modules, and thus verify the rationality of TNet’s architecture. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly 955 learning to align and translate. In Proceedings of ICLR. 
Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828. Sasha Blair-Goldensohn, Kerry Hannan, Ryan McDonald, Tyler Neylon, George A Reis, and Jeff Reynar. 2008. Building a sentiment summarizer for local service reviews. In WWW workshop on NLP in the information explosion era, pages 339–348. Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on memory for aspect sentiment analysis. In Proceedings of EMNLP, pages 463–472. Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent twitter sentiment classification. In Proceedings of ACL, pages 49–54. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016a. Deep residual learning for image recognition. In Proceedings of CVPR, pages 770–778. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016b. Identity mappings in deep residual networks. In Proceedings of ECCV, pages 630–645. Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent twitter sentiment classification. In Proceedings of ACL, pages 151–160. Rie Johnson and Tong Zhang. 2015. Semi-supervised convolutional neural networks for text categorization via region embedding. In Proceedings of NIPS, pages 919–927. Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In Proceedings of ICML, pages 2342–2350. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP, pages 1746–1751. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR. Svetlana Kiritchenko, Xiaodan Zhu, Colin Cherry, and Saif Mohammad. 2014. Nrc-canada-2014: Detecting aspects and sentiment in customer reviews. In Proceedings of SemEval, pages 437–442. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In Proceedings of AAAI, volume 333, pages 2267–2273. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature, 521(7553):436. Xin Li and Wai Lam. 2017. Deep multi-task learning for aspect term extraction with memory interaction. In Proceedings of EMNLP, pages 2876–2882. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1):1–167. Jiangming Liu and Yue Zhang. 2017. Attention modeling for targeted sentiment. In Proceedings of EACL, pages 572–577. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of EMNLP, pages 1412–1421. Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive attention networks for aspect-level sentiment classification. In Proceedings of IJCAI, pages 4068–4074. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP, pages 1532–1543. Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of SemEval, pages 27–35. Bei Shi, Zihao Fu, Lidong Bing, and Wai Lam. 2018. Learning domain-sensitive and sentimentaware word embeddings. In Proceedings of ACL. Rupesh K Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015a. 
Training very deep networks. In Proceedings of NIPS, pages 2377–2385. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015b. Highway networks. arXiv preprint arXiv:1505.00387. Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2016a. Effective lstms for target-dependent sentiment classification. In Proceedings of COLING, pages 3298–3307. Duyu Tang, Bing Qin, and Ting Liu. 2016b. Aspect level sentiment classification with deep memory network. In Proceedings of EMNLP, pages 214–224. Yi Tay, Anh Tuan Luu, and Siu Cheung Hui. 2017. Learning to attend via word-aspect associative fusion for aspect-based sentiment analysis. arXiv preprint arXiv:1712.05403. Ivan Titov and Ryan McDonald. 2008. Modeling online reviews with multi-grain topic models. In Proceedings of WWW, pages 111–120. Duy-Tin Vo and Yue Zhang. 2015. Target-dependent twitter sentiment classification with rich automatic features. In Proceedings of IJCAI, pages 1347– 1353. 956 Joachim Wagner, Piyush Arora, Santiago Cortes, Utsab Barman, Dasha Bogdanova, Jennifer Foster, and Lamia Tounsi. 2014. Dcu: Aspect-based polarity classification for semeval task 4. In Proceedings of SemEval, pages 223–229. Yequan Wang, Minlie Huang, xiaoyan zhu, and Li Zhao. 2016. Attention-based lstm for aspect-level sentiment classification. In Proceedings of EMNLP, pages 606–615. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of ICML, pages 2048–2057. Min Yang, Wenting Tu, Jingxuan Wang, Fei Xu, and Xiaojun Chen. 2017. Attention based lstm for target dependent sentiment classification. In Proceedings of AAAI, pages 5013–5014. Jianxing Yu, Zheng-Jun Zha, Meng Wang, and TatSeng Chua. 2011. Aspect ranking: identifying important product aspects from online consumer reviews. In Proceedings of ACL, pages 1496–1505. Meishan Zhang, Yue Zhang, and Duy-Tin Vo. 2016. Gated neural networks for targeted sentiment analysis. In Proceedings of AAAI, pages 3087–3093. Ye Zhang and Byron Wallace. 2017. A sensitivity analysis of (and practitioners guide to) convolutional neural networks for sentence classification. In Proceedings of IJCNLP, pages 253–263. Yue Zhang, Zhenghua Li, Jun Lang, Qingrong Xia, and Min Zhang. 2017. Dependency parsing with partial annotations: An empirical comparison. In Proceedings of IJCNLP, pages 49–58.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 957–967 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 957 Target-Sensitive Memory Networks for Aspect Sentiment Classification Shuai Wang†, Sahisnu Mazumder†, Bing Liu†, Mianwei Zhou‡, Yi Chang§ †Department of Computer Science, University of Illinois at Chicago, USA ‡Plus.AI, USA §Artificial Intelligence School, Jilin University, China [email protected], [email protected] [email protected], [email protected], [email protected] Abstract Aspect sentiment classification (ASC) is a fundamental task in sentiment analysis. Given an aspect/target and a sentence, the task classifies the sentiment polarity expressed on the target in the sentence. Memory networks (MNs) have been used for this task recently and have achieved state-of-the-art results. In MNs, attention mechanism plays a crucial role in detecting the sentiment context for the given target. However, we found an important problem with the current MNs in performing the ASC task. Simply improving the attention mechanism will not solve it. The problem is referred to as target-sensitive sentiment, which means that the sentiment polarity of the (detected) context is dependent on the given target and it cannot be inferred from the context alone. To tackle this problem, we propose the targetsensitive memory networks (TMNs). Several alternative techniques are designed for the implementation of TMNs and their effectiveness is experimentally evaluated. 1 Introduction Aspect sentiment classification (ASC) is a core problem of sentiment analysis (Liu, 2012). Given an aspect and a sentence containing the aspect, ASC classifies the sentiment polarity expressed in the sentence about the aspect, namely, positive, neutral, or negative. Aspects are also called opinion targets (or simply targets), which are usually product/service features in customer reviews. In this paper, we use aspect and target interchangeably. In practice, aspects can be specified by the user or extracted automatically using an aspect extraction technique (Liu, 2012). In this work, we assume the aspect terms are given and only focus on the classification task. Due to their impressive results in many NLP tasks (Deng et al., 2014), neural networks have been applied to ASC (see the survey (Zhang et al., 2018)). Memory networks (MNs), a type of neural networks which were first proposed for question answering (Weston et al., 2015; Sukhbaatar et al., 2015), have achieved the state-of-the-art results in ASC (Tang et al., 2016). A key factor for their success is the attention mechanism. However, we found that using existing MNs to deal with ASC has an important problem and simply relying on attention modeling cannot solve it. That is, their performance degrades when the sentiment of a context word is sensitive to the given target. Let us consider the following sentences: (1) The screen resolution is excellent but the price is ridiculous. (2) The screen resolution is excellent but the price is high. (3) The price is high. (4) The screen resolution is high. In sentence (1), the sentiment expressed on aspect screen resolution (or resolution for short) is positive, whereas the sentiment on aspect price is negative. For the sake of predicting correct sentiment, a crucial step is to first detect the sentiment context about the given aspect/target. We call this step targeted-context detection. 
Memory networks (MNs) can deal with this step quite well because the sentiment context of a given aspect can be captured by the internal attention mechanism in MNs. Concretely, in sentence (1) the word “excellent” can be identified as the sentiment context when resolution is specified. Likewise, the context word “ridiculous” will be placed with a high attention when price is the target. With the correct targeted-context detected, a trained MN, which recognizes “excellent” as positive sentiment and “ridiculous” as negative sentiment, will infer correct sentiment polarity for the given target. This 958 is relatively easy as “excellent” and “ridiculous” are both target-independent sentiment words, i.e., the words themselves already indicate clear sentiments. As illustrated above, the attention mechanism addressing the targeted-context detection problem is very useful for ASC, and it helps classify many sentences like sentence (1) accurately. This also led to existing and potential research in improving attention modeling (discussed in Section 5). However, we observed that simply focusing on tackling the target-context detection problem and learning better attention are not sufficient to solve the problem found in sentences (2), (3) and (4). Sentence (2) is similar to sentence (1) except that the (sentiment) context modifying aspect/target price is “high”. In this case, when “high” is assigned the correct attention for the aspect price, the model also needs to capture the sentiment interaction between “high” and price in order to identify the correct sentiment polarity. This is not as easy as sentence (1) because “high” itself indicates no clear sentiment. Instead, its sentiment polarity is dependent on the given target. Looking at sentences (3) and (4), we further see the importance of this problem and also why relying on attention mechanism alone is insufficient. In these two sentences, sentiment contexts are both “high” (i.e., same attention), but sentence (3) is negative and sentence (4) is positive simply because their target aspects are different. Therefore, focusing on improving attention will not help in these cases. We will give a theoretical insight about this problem with MNs in Section 3. In this work, we aim to solve this problem. To distinguish it from the aforementioned targetedcontext detection problem as shown by sentence (1), we refer to the problem in (2), (3) and (4) as the target-sensitive sentiment (or target-dependent sentiment) problem, which means that the sentiment polarity of a detected/attended context word is conditioned on the target and cannot be directly inferred from the context word alone, unlike “excellent” and “ridiculous”. To address this problem, we propose target-sensitive memory networks (TMNs), which can capture the sentiment interaction between targets and contexts. We present several approaches to implementing TMNs and experimentally evaluate their effectiveness. 2 Memory Network for ASC This section describes our basic memory network for ASC, also as a background knowledge. It does not include the proposed target-sensitive sentiment solutions, which are introduced in Section 4. The model design follows previous studies (Sukhbaatar et al., 2015; Tang et al., 2016) except that a different attention alignment function is used (shown in Eq. 1). Their original models will be compared in our experiments as well. The definitions of related notations are given in Table 1. 
t a target word, t ∈RV ×1 vt target embedding of t, vt ∈Rd×1 xi a context word in a sentence, xi ∈RV ×1 mi, ci input, output context embedding of word xi, and mi, ci ∈Rd×1 V number of words in vocabulary d vector/embedding dimension A input embedding matrix A ∈Rd×V C output embedding matrix C ∈Rd×V α attention distribution in a sentence αi attention of context word i, αi ∈(0, 1) o output representation, o ∈Rd×1 K number of sentiment classes s sentiment score, s ∈RK×1 y sentiment probability Table 1: Definition of Notations Input Representation: Given a target aspect t, an embedding matrix A is used to convert t into a vector representation, vt (vt = At). Similarly, each context word (non-aspect word in a sentence) xi ∈{x1, x2, ...xn} is also projected to the continuous space stored in memory, denoted by mi (mi = Axi) ∈{m1, m2, ...mn}. Here n is the number of words in a sentence and i is the word position/index. Both t and xi are one-hot vectors. For an aspect expression with multiple words, its aspect representation vt is the averaged vector of those words (Tang et al., 2016). Attention: Attention can be obtained based on the above input representation. Specifically, an attention weight αi for the context word xi is computed based on the alignment function: αi = softmax(vT t Mmi) (1) where M ∈Rd×d is the general learning matrix suggested by Luong et al. (2015). In this manner, attention α = {α1, α2, ..αn} is represented as a vector of probabilities, indicating the weight/importance of context words towards a given target. Note that αi ∈(0, 1) and P i αi = 1. 959 Output Representation: Another embedding matrix C is used for generating the individual (output) continuous vector ci (ci = Cxi) for each context word xi. A final response/output vector o is produced by summing over these vectors weighted with the attention α, i.e., o = P i αici. Sentiment Score (or Logit): The aspect sentiment scores (also called logits) for positive, neutral, and negative classes are then calculated, where a sentiment-specific weight matrix W ∈ RK×d is used. The sentiment scores are represented in a vector s ∈RK×1, where K is the number of (sentiment) classes, which is 3 in ASC. s = W(o + vt) (2) The final sentiment probability y is produced with a softmax operation, i.e., y = softmax(s). 3 Problem of the above Model for Target-Sensitive Sentiment This section analyzes the problem of targetsensitive sentiment in the above model. The analysis can be generalized to many existing MNs as long as their improvements are on attention α only. We first expand the sentiment score calculation from Eq. 2 to its individual terms: s = W(o + vt) = W( X i αici + vt) = α1Wc1 + α2Wc2 + ...αnWcn + Wvt (3) where “+” denotes element-wise summation. In Eq. 3, αiWci can be viewed as the individual sentiment logit for a context word and Wvt is the sentiment logit of an aspect. They are linearly combined to determine the final sentiment score s. This can be problematic in ASC. First, an aspect word often expresses no sentiment, for example, “screen”. However, if the aspect term vt is simply removed from Eq. 3, it also causes the problem that the model cannot handle target-dependent sentiment. For instance, the sentences (3) and (4) in Section 1 will then be treated as identical if their aspect words are not considered. 
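A minimal NumPy sketch of the alignment function in Eq. 1, with array shapes and variable names chosen only for illustration:

import numpy as np

def attention(v_t, M, memory):
    # Eq. 1: alpha_i = softmax(v_t^T M m_i) over the n context words.
    # v_t: (d,) target embedding, M: (d, d) general learning matrix,
    # memory: (n, d) input context embeddings m_1..m_n.
    scores = memory @ (M.T @ v_t)          # entry i equals v_t^T M m_i
    scores = scores - scores.max()         # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha                           # probabilities over context words, sum to 1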
Second, if an aspect word is considered and it directly bears some positive or negative sentiment, then when an aspect word occurs with different context words for expressing opposite sentiments, a contradiction can be resulted from them, especially in the case that the context word is a target-sensitive sentiment word. We explain it as follows. Let us say we have two target words price and resolution (denoted as p and r). We also have two possible context words “high” and “low” (denoted as h and l). As these two sentiment words can modify both aspects, we can construct four snippets “high price”, “low price”, “high resolution” and “low resolution”. Their sentiments are negative, positive, positive, and negative respectively. Let us set W to R1×d so that s becomes a 1-dimensional sentiment score indicator. s > 0 indicates a positive sentiment and s < 0 indicates a negative sentiment. Based on the above example snippets or phrases we have four corresponding inequalities: (a) W(αhch + vp) < 0, (b) W(αlcl + vp) > 0, (c) W(αhch + vr) > 0 and (d) W(αlcl + vr) < 0. We can drop all α terms here as they all equal to 1, i.e., they are the only context word in the snippets to attend to (the target words are not contexts). From (a) and (b) we can infer (e) Wch < −Wvp < Wcl. From (c) and (d) we can infer (f) Wcl < −Wvr < Wch. From (e) and (f) we have (g) Wch < Wcl < Wch, which is a contradiction. This contradiction means that MNs cannot learn a set of parameters W and C to correctly classify the above four snippets/sentences at the same time. This contradiction also generalizes to realworld sentences. That is, although real-world review sentences are usually longer and contain more words, since the attention mechanism makes MNs focus on the most important sentiment context (the context with high αi scores), the problem is essentially the same. For example, in sentences (2) and (3) in Section 1, when price is targeted, the main attention will be placed on “high”. For MNs, these situations are nearly the same as that for classifying the snippet “high price”. We will also show real examples in the experiment section. One may then ask whether improving attention can help address the problem, as αi can affect the final results by adjusting the sentiment effect of the context word via αiWci. This is unlikely, if not impossible. First, notice that αi is a scalar ranging in (0,1), which means it essentially assigns higher or lower weight to increase or decrease the sentiment effect of a context word. It cannot change the intrinsic sentiment orientation/polarity of the context, which is determined by Wci. For example, if Wci assigns the context word “high” a positive sentiment (Wci > 0), αi will not make it negative (i.e., αiWci < 0 cannot be achieved by chang960 ing αi). Second, other irrelevant/unimportant context words often carry no or little sentiment information, so increasing or decreasing their weights does not help. For example, in the sentence “the price is high”, adjusting the weights of context words “the” and “is” will neither help solve the problem nor be intuitive to do so. 4 The Proposed Approaches This section introduces six (6) alternative targetsensitive memory networks (TMNs), which all can deal with the target-sensitive sentiment problem. Each of them has its characteristics. Non-linear Projection (NP): This is the first approach that utilizes a non-linear projection to capture the interplay between an aspect and its context. 
Instead of directly following the common linear combination as shown in Eq. 3, we use a non-linear projection (tanh) as the replacement to calculate the aspect-specific sentiment score. s = W · tanh( X i αici + vt) (4) As shown in Eq. 4, by applying a non-linear projection over attention-weighted ci and vt, the context and aspect information are coupled in a way that the final sentiment score cannot be obtained by simply summing their individual contributions (compared with Eq. 3). This technique is also intuitive in neural networks. However, notice that by using the non-linear projection (or adding more sophisticated hidden layers) over them in this way, we sacrifice some interpretability. For example, we may have difficulty in tracking how each individual context word (ci) affects the final sentiment score s, as all context and target representations are coupled. To avoid this, we can use the following five alternative techniques. Contextual Non-linear Projection (CNP): Despite the fact that it also uses the non-linear projection, this approach incorporates the interplay between a context word and the given target into its (output) context representation. We thus name it Contextual Non-linear Projection (CNP). s = W X i αi · tanh(ci + vt) (5) From Eq. 5, we can see that this approach can keep the linearity of attention-weighted context aggregation while taking into account the aspect information with non-linear projection, which works in a different way compared to NP. If we define ˜ci = tanh(ci + vt), ˜ci can be viewed as the target-aware context representation of context xi and the final sentiment score is calculated based on the aggregation of such ˜ci. This could be a more reasonable way to carry the aspect information rather than simply summing the aspect representation (Eq. 3). However, one potential disadvantage is that this setting uses the same set of vector representations (learned by embeddings C) for multiple purposes, i.e., to learn output (context) representations and to capture the interplay between contexts and aspects. This may degenerate its model performance when the computational layers in memory networks (called “hops”) are deep, because too much information is required to be encoded in such cases and a sole set of vectors may fail to capture all of it. To overcome this, we suggest the involvement of an additional new set of embeddings/vectors, which is exclusively designed for modeling the sentiment interaction between an aspect and its context. The key idea is to decouple different functioning components with different representations, but still make them work jointly. The following four techniques are based on this idea. Interaction Term (IT): The third approach is to formulate explicit target-context sentiment interaction terms. Different from the targeted-context detection problem which is captured by attention (discussed in Section 1), here the targetcontext sentiment (TCS) interaction measures the sentiment-oriented interaction effect between targets and contexts, which we refer to as TCS interaction (or sentiment interaction) for short in the rest of this paper. Such sentiment interaction is captured by a new set of vectors, and we thus also call such vectors TCS vectors. s = X i αi(Wsci + wI⟨di, dt⟩) (6) In Eq. 6, Ws ∈RK×d and wI ∈RK×1 are used instead of W in Eq. 3. Ws models the direct sentiment effect from ci while wI works with di and dt together for learning the TCS interaction. 
di and dt are TCS vector representations of context xi and aspect t, produced from a new embedding matrix D, i.e., di = Dxi, dt = Dt (D ∈Rd×V and di, dt ∈Rd×1). Unlike input and output embeddings A and C, D is designed to capture the sentiment interac961 tion. The vectors from D affect the final sentiment score through wI⟨di, dt⟩, where wI is a sentimentspecific vector and ⟨di, dt⟩∈R denotes the dot product of the two TCS vectors di and dt. Compared to the basic MNs, this model can better capture target-sensitive sentiment because the interactions between a context word h and different aspect words (say, p and r) can be different, i.e., ⟨dh, dp⟩̸= ⟨dh, dr⟩. The key advantage is that now the sentiment effect is explicitly dependent on its target and context. For example, ⟨dh, dp⟩can help shift the final sentiment to negative and ⟨dh, dr⟩can help shift it to positive. Note that α is still needed to control the importance of different contexts. In this manner, targeted-context detection (attention) and TCS interaction are jointly modeled and work together for sentiment inference. The proposed techniques introduced below also follow this core idea but with different implementations or properties. We thus will not repeat similar discussions. Coupled Interaction (CI): This proposed technique associates the TCS interaction with an additional set of context representation. This representation is for capturing the global correlation between context and different sentiment classes. s = X i αi(Wsci + WI⟨di, dt⟩ei) (7) Specifically, ei is another output representation for xi, which is coupled with the sentiment interaction factor ⟨di, dt⟩. For each context word xi, ei is generated as ei = Exi where E ∈Rd×V is an embedding matrix. ⟨di, dt⟩and ei function together as a target-sensitive context vector and are used to produce sentiment scores with WI (WI ∈RK×d). Joint Coupled Interaction (JCI): A natural variant of the above model is to replace ei with ci, which means to learn a joint output representation. This can also reduce the number of learning parameters and simplify the CI model. s = X i αi(Wsci + WI⟨di, dt⟩ci) (8) Joint Projected Interaction (JPI): This model also employs a unified output representation like JCI, but a context output vector ci will be projected to two different continuous spaces before sentiment score calculation. To achieve the goal, two projection matrices W1, W2 and the non-linear projection function tanh are used. The intuition is that, when we want to reduce the (embedding) parameters and still learn a joint representation, two different sentiment effects need to be separated in different vector spaces. The two sentiment effects are modeled as two terms: s = X i αiWJ tanh(W1ci) + X i αiWJ⟨di, dt⟩tanh(W2ci) (9) where the first term can be viewed as learning target-independent sentiment effect while the second term captures the TCS interaction. A joint sentiment-specific weight matrix WJ(WJ ∈ RK×d) is used to control/balance the interplay between these two effects. Discussions: (a) In IT, CI, JCI, and JPI, their first-order terms are still needed, because not in all cases sentiment inference needs TCS interaction. For some simple examples like “the battery is good”, the context word “good” simply indicates clear sentiment, which can be captured by their first-order term. However, notice that the modeling of second-order terms offers additional help in both general and target-sensitive scenarios. (b) TCS interaction can be calculated by other modeling functions. 
We have tried several methods and found that using the dot product ⟨di, dt⟩or dT i Wdt (with a projection matrix W) generally produces good results. (c) One may ask whether we can use fewer embeddings or just use one universal embedding to replace A, C and D (the definition of D can be found in the introduction of IT). We have investigated them as well. We found that merging A and C is basically workable. But merging D and A/C produces poor results because they essentially function with different purposes. While A and C handle targeted-context detection (attention), D captures the TCS interaction. (d) Except NP, we do not apply non-linear projection to the sentiment score layer. Although adding non-linear transformation to it may further improve model performance, the individual sentiment effect from each context will become untraceable, i.e., losing some interpretability. In order to show the effectiveness of learning TCS interaction and for analysis purpose, we do not use it in this work. But it can be flexibly added for specific tasks/analyses that do not require strong interpretability. Loss function: The proposed models are all trained in an end-to-end manner by minimizing the cross entropy loss. Let us denote a sentence and a 962 target aspect as x and t respectively. They appear together in a pair format (x, t) as input and all such pairs construct the dataset H. g(x,t) is a one-hot vector and gk (x,t) ∈{0, 1} denotes a gold sentiment label, i.e., whether (x, t) shows sentiment k. yx,t is the model-predicted sentiment distribution for (x, t). yk x,t denotes its probability in class k. Based on them, the training loss is constructed as: loss = − X (x,t)∈H X k∈K gk (x,t) log yk (x,t) (10) 5 Related Work Aspect sentiment classification (ASC) (Hu and Liu, 2004), which is different from document or sentence level sentiment classification (Pang et al., 2002; Kim, 2014; Yang et al., 2016), has recently been tackled by neural networks with promising results (Dong et al., 2014; Nguyen and Shirai, 2015) (also see the survey (Zhang et al., 2018)). Later on, the seminal work of using attention mechanism for neural machine translation (Bahdanau et al., 2015) popularized the application of the attention mechanism in many NLP tasks (Hermann et al., 2015; Cho et al., 2015; Luong et al., 2015), including ASC. Memory networks (MNs) (Weston et al., 2015; Sukhbaatar et al., 2015) are a type of neural models that involve such attention mechanisms (Bahdanau et al., 2015), and they can be applied to ASC. Tang et al. (2016) proposed an MN variant to ASC and achieved the state-of-the-art performance. Another common neural model using attention mechanism is the RNN/LSTM (Wang et al., 2016). As discussed in Section 1, the attention mechanism is suitable for ASC because it effectively addresses the targeted-context detection problem. Along this direction, researchers have studied more sophisticated attentions to further help the ASC task (Chen et al., 2017; Ma et al., 2017; Liu and Zhang, 2017). Chen et al. (2017) proposed to use a recurrent attention mechanism. Ma et al. (2017) used multiple sets of attentions, one for modeling the attention of aspect words and one for modeling the attention of context words. Liu and Zhang (2017) also used multiple sets of attentions, one obtained from the left context and one obtained from the right context of a given target. Notice that our work does not lie in this direction. 
Our goal is to solve the target-sensitive sentiment and to capture the TCS interaction, which is a different problem. This direction is also finergrained, and none of the above works addresses this problem. Certainly, both directions can improve the ASC task. We will also show in our experiments that our work can be integrated with an improved attention mechanism. To the best of our knowledge, none of the existing studies addresses the target-sensitive sentiment problem in ASC under the purely data-driven and supervised learning setting. Other concepts like sentiment shifter (Polanyi and Zaenen, 2006) and sentiment composition (Moilanen and Pulman, 2007; Choi and Cardie, 2008; Socher et al., 2013) are also related, but they are not learned automatically and require rule/patterns or external resources (Liu, 2012). Note that our approaches do not rely on handcrafted patterns (Ding et al., 2008; Wu and Wen, 2010), manually compiled sentiment constraints and review ratings (Lu et al., 2011), or parse trees (Socher et al., 2013). 6 Experiments We perform experiments on the datasets of SemEval Task 2014 (Pontiki et al., 2014), which contain online reviews from domain Laptop and Restaurant. In these datasets, aspect sentiment polarities are labeled. The training and test sets have also been provided. Full statistics of the datasets are given in Table 2. Dataset Positive Neutral Negative Train Test Train Test Train Test Restaurant 2164 728 637 196 807 196 Laptop 994 341 464 169 870 128 Table 2: Statistics of Datasets 6.1 Candidate Models for Comparison MN: The classic end-to-end memory network (Sukhbaatar et al., 2015). AMN: A state-of-the-art memory network used for ASC (Tang et al., 2016). The main difference from MN is in its attention alignment function, which concatenates the distributed representations of the context and aspect, and uses an additional weight matrix for attention calculation, following the method introduced in (Bahdanau et al., 2015). BL-MN: Our basic memory network presented in Section 2, which does not use the proposed techniques for capturing target-sensitive sentiments. AE-LSTM: RNN/LSTM is another popular attention based neural model. Here we compare 963 with a state-of-the-art attention-based LSTM for ASC, AE-LSTM (Wang et al., 2016). ATAE-LSTM: Another attention-based LSTM for ASC reported in (Wang et al., 2016). Target-sensitive Memory Networks (TMNs): The six proposed techniques, NP, CNP, IT, CI, JCI, and JPI give six target-sensitive memory networks. Note that other non-neural network based models like SVM and neural models without attention mechanism like traditional LSTMs have been compared and reported with inferior performance in the ASC task (Dong et al., 2014; Tang et al., 2016; Wang et al., 2016), so they are excluded from comparisons here. Also, note that non-neural models like SVMs require feature engineering to manually encode aspect information, while this work aims to improve the aspect representation learning based approaches. 6.2 Evaluation Measure Since we have a three-class classification task (positive, negative and neutral) and the classes are imbalanced as shown in Table 2, we use F1-score as our evaluation measure. We report both F1Macro over all classes and all individual classbased F1 scores. As our problem requires finegrained sentiment interaction, the class-based F1 provides more indicative information. In addition, we report the accuracy (same as F1-Micro), as it is used in previous studies. 
However, we suggest using F1-score because accuracy biases towards the majority class. 6.3 Training Details We use the open-domain word embeddings1 for the initialization of word vectors. We initialize other model parameters from a uniform distribution U(-0.05, 0.05). The dimension of the word embedding and the size of the hidden layers are 300. The learning rate is set to 0.01 and the dropout rate is set to 0.1. Stochastic gradient descent is used as our optimizer. The position encoding is also used (Tang et al., 2016). We also compare the memory networks in their multiple computational layers version (i.e., multiple hops) and the number of hops is set to 3 as used in the mentioned previous studies. We implemented all models in the TensorFlow environment using same input, embedding size, dropout rate, optimizer, etc. 1https://github.com/mmihaltz/word2vec-GoogleNewsvectors so as to test our hypotheses, i.e., to make sure the achieved improvements do not come from elsewhere. Meanwhile, we can also report all evaluation measures discussed above2. 10% of the training data is used as the development set. We report the best results for all models based on their F-1 Macro scores. 6.3.1 Result Analysis The classification results are shown in Table 3. Note that the candidate models are all based on classic/standard attention mechanism, i.e., without sophisticated or multiple attentions involved. We compare the 1-hop and 3-hop memory networks as two different settings. The top three F1-Macro scores are marked in bold. Based on them, we have the following observations: 1. Comparing the 1-hop memory networks (first nine rows), we see significant performance gains achieved by CNP, CI, JCI, and JPI on both datasets, where each of them has p < 0.01 over the strongest baseline (BL-MN) from paired t-test using F1-Macro. IT also outperforms the other baselines while NP has similar performance to BL-MN. This indicates that TCS interaction is very useful, as BL-MN and NP do not model it. 2. In the 3-hop setting, TMNs achieve much better results on Restaurant. JCI, IT, and CI achieve the best scores, outperforming the strongest baseline AMN by 2.38%, 2.18%, and 2.03%. On Laptop, BL-MN and most TMNs (except CNP and JPI) perform similarly. However, BL-MN performs poorly on Restaurant (only better than two models) while TMNs show more stable performance. 3. Comparing all TMNs, we see that JCI works the best as it always obtains the top-three scores on two datasets and in two settings. CI and JPI also perform well in most cases. IT, NP, and CNP can achieve very good scores in some cases but are less stable. We also analyzed their potential issues in Section 4. 4. It is important to note that these improvements are quite large because in many cases sentiment interactions may not be necessary (like sentence (1) in Section 1). The overall good results obtained by TMNs demonstrate their capability of handling both general and target-sensitive sentiments, i.e., the proposed 2Most related studies report accuracy only. 964 Restaurant Laptop Model Macro Neg. Neu. Pos. Micro Model Macro Neg. Neu. Pos. 
Micro MN 58.91 57.07 36.81 82.86 71.52 MN 56.16 47.06 45.81 75.63 61.91 AMN 63.82 61.76 43.56 86.15 75.68 AMN 60.01 52.67 47.89 79.48 66.14 BL-MN 64.34 61.96 45.86 85.19 75.30 BL-MN 62.89 57.16 49.51 81.99 68.90 NP 64.62 64.89 43.21 85.78 75.93 NP 62.63 56.43 49.62 81.83 68.65 CNP 65.58 62.97 47.65 86.12 75.97 CNP 64.38 57.92 53.23 81.98 69.62 IT 65.37 65.22 44.44 86.46 76.98 IT 63.07 57.01 50.62 81.58 68.38 CI 66.78 65.49 48.32 86.51 76.96 CI 63.65 57.33 52.60 81.02 68.65 JCI 66.21 65.74 46.23 86.65 77.16 JCI 64.19 58.49 53.69 80.40 68.42 JPI 66.58 65.44 47.60 86.71 76.96 JPI 64.53 58.62 51.71 83.25 70.06 AE-LSTM 66.45 64.22 49.40 85.73 76.43 AE-LSTM 62.45 55.26 50.35 81.74 68.50 ATAE-LSTM 65.41 66.19 43.34 86.71 76.61 ATAE-LSTM 59.41 55.27 42.15 80.81 67.40 MN (hops) 62.68 60.35 44.57 83.11 72.86 MN (hops) 60.61 55.59 45.94 80.29 66.61 AMN (hops) 66.46 65.57 46.64 87.16 77.27 AMN (hops) 65.16 60.00 52.56 82.91 70.38 BL-MN (hops) 65.71 63.83 46.91 86.39 76.45 BL-MN (hops) 67.11 63.10 54.53 83.69 72.15 NP (hops) 65.98 64.18 47.86 85.90 75.73 NP (hops) 67.79 63.17 56.27 83.92 72.43 CNP (hops) 66.87 65.32 49.07 86.22 76.65 CNP (hops) 64.85 58.84 53.29 82.43 70.25 IT (hops) 68.64 67.11 51.47 87.33 78.55 IT (hops) 66.23 61.43 53.69 83.57 71.37 CI (hops) 68.49 64.83 53.03 87.60 78.69 CI (hops) 66.79 61.80 55.30 83.26 71.67 JCI (hops) 68.84 66.28 52.06 88.19 78.79 JCI (hops) 67.23 61.08 57.49 83.11 71.79 JPI (hops) 67.86 66.72 49.63 87.24 77.95 JPI (hops) 65.16 59.01 54.25 82.20 70.18 Table 3: Results of all models on two datasets. Top three F1-Macro scores are marked in bold. The first nine models are 1-hop memory networks. The last nine models are 3-hop memory networks. techniques do not bring harm while capturing additional target-sensitive signals. 5. Micro-F1/accuracy is greatly affected by the majority class, as we can see the scores from Pos. and Micro are very consistent. TMNs, in fact, effectively improve the minority classes, which are reflected in Neg. and Neu., for example, JCI improves BL-MN by 3.78% in Neg. on Restaurant. This indicates their usefulness of capturing fine-grained sentiment signals. We will give qualitative examples in next section to show their modeling superiority for identifying target-sensitive sentiments. Restaurant Model Macro Neg. Neu. Pos. Micro TRMN 69.00 68.66 50.66 87.70 78.86 RMN 67.48 66.48 49.11 86.85 77.14 Laptop Model Macro Neg. Neu. Pos. Micro TRMN 68.18 62.63 57.37 84.30 72.92 RMN 67.17 62.65 55.31 83.55 72.07 Table 4: Results with Recurrent Attention Integration with Improved Attention: As discussed, the goal of this work is not for learning better attention but addressing the targetsensitive sentiment. In fact, solely improving attention does not solve our problem (see Sections 1 and 3). However, better attention can certainly help achieve an overall better performance for the ASC task, as it makes the targeted-context detection more accurate. Here we integrate our proposed technique JCI with a state-of-the-art sophisticated attention mechanism, namely, the recurrent attention framework, which involves multiple attentions learned iteratively (Kumar et al., 2016; Chen et al., 2017). We name our model with this integration as Target-sensitive Recurrent-attention Memory Network (TRMN) and the basic memory network with the recurrent attention as Recurrentattention Memory Network (RMN). Their results are given in Table 4. TRMN achieves significant performance gain with p < 0.05 in paired t-test. 
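The evaluation just described can be sketched with scikit-learn and scipy. Exactly which paired scores enter the reported t-tests (runs, folds, or domain settings) is not spelled out, so the pairing below is an assumption; the metric computations follow Section 6.2.

from scipy.stats import ttest_rel
from sklearn.metrics import accuracy_score, f1_score

def asc_scores(y_true, y_pred):
    # Class-wise F1 (Neg./Neu./Pos.), F1-Macro, and accuracy (= F1-Micro).
    per_class = f1_score(y_true, y_pred, average=None)
    macro = f1_score(y_true, y_pred, average="macro")
    micro = accuracy_score(y_true, y_pred)
    return per_class, macro, micro

def is_significant(macro_scores_a, macro_scores_b, alpha=0.05):
    # Paired t-test between two systems' F1-Macro scores (e.g. TRMN vs. RMN).
    _, p = ttest_rel(macro_scores_a, macro_scores_b)
    return p < alpha, p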
6.4 Effect of TCS Interaction for Identifying Target-Sensitive Sentiment We now give some real examples to show the effectiveness of modeling TCS interaction for identifying target-sensitive sentiments, by comparing a regular MN and a TMN. Specifically, BL-MN and JPI are used. Other MNs/TMNs have similar performances to BL-MN/JPI qualitatively, so we do not list all of them here. For BL-MN and JPI, their sentiment scores of a single context word i are calculated by αiWci (from Eq. 3) and αiWJtanh(W1ci) + αiWJ⟨di, dt⟩tanh(W2ci) (from Eq. 9), each of which results in a 3-dimensional vector. Illustrative Examples: Table 5 shows two records in Laptop. In record 1, to identify the sentiment of target price in the presented sentence, the sentiment interaction between the context word “higher” and the target word price is the key. The 965 Record 1 Record 2 Sentence Price was higher when purchased on MAC.. Sentence (MacBook) Air has higher resolution.. Target Price Sentiment Negative Target Resolution Sentiment Positive Result Sentiment Logits on context “higher” Result Sentiment Logits on context “higher” TMN Negative Neutral Positive TMN Negative Neutral Positive 0.2663 (Correct) -0.2604 -0.0282 -0.4729 -0.3949 0.9041 (Correct) MN Negative Neutral Positive MN Negative Neutral Positive 0.3641 (Correct) -0.3275 -0.0750 0.2562 (Wrong) -0.2305 - 0.0528 Table 5: Sample Records and Model Comparison between MN and TMN specific sentiment scores of the word “higher” towards negative, neutral and positive classes in both models are reported. We can see both models accurately assign the highest sentiment scores to the negative class. We also observe that in MN the negative score (0.3641) in the 3-dimension vector {0.3641, −0.3275, −0.0750} calculated by αiWci is greater than neutral (−0.3275) and positive (−0.0750) scores. Notice that αi is always positive (ranging in (0, 1)), so it can be inferred that the first value in vector Wci is greater than the other two values. Here ci denotes the vector representation of “higher” so we use chigher to highlight it and we have {Wchigher}Negative > {Wchigher}Neutral/Positive as an inference. In record 2, the target is resolution and its sentiment is positive in the presented sentence. Although we have the same context word “higher”, different from record 1, it requires a positive sentiment interaction with the current target. Looking at the results, we see TMN assigns the highest sentiment score of word “higher” to positive class correctly, whereas MN assigns it to negative class. This error is expected if we consider the above inference {Wchigher}Negative > {Wchigher}Neutral/Positive in MN. The cause of this unavoidable error is that Wci is not conditioned on the target. In contrast, WJ⟨di, ·dt⟩tanh(W2ci) can change the sentiment polarity with the aspect vector dt encoded. Other TMNs also achieve it (like WI⟨di, dt⟩ci in JCI). One may notice that the aspect information (vt) is actually also considered in the form of αiWci + Wvt in MNs and wonder whether Wvt may help address the problem given different vt. Let us assume it helps, which means in the above example an MN makes Wvresolution favor the positive class and Wvprice favor the negative class. But then we will have trouble when the context word is “lower”, where it requires Wvresolution to favor the negative class and Wvprice to favor the positive class. This contradiction reflects the theoretical problem discussed in Section 3. 
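The contrast in Table 5 can be reproduced schematically. The sketch below computes the per-context-word logits of MN (from Eq. 3) and JPI (from Eq. 9) for the same context word under different targets; all tensors are toy values rather than learned parameters, and the only point is that the JPI interaction term lets the polarity flip with the target while alpha_i W c_i cannot.

import numpy as np

def mn_word_logits(alpha_i, W, c_i):
    # MN: alpha_i * W c_i -- identical for every target aspect.
    return alpha_i * (W @ c_i)

def jpi_word_logits(alpha_i, W_J, W1, W2, c_i, d_i, d_t):
    # JPI (Eq. 9): a target-independent part plus a TCS interaction part
    # whose sign and magnitude depend on the dot product <d_i, d_t>.
    base = alpha_i * (W_J @ np.tanh(W1 @ c_i))
    interaction = alpha_i * float(d_i @ d_t) * (W_J @ np.tanh(W2 @ c_i))
    return base + interaction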
Other Examples: We also found other interesting target-sensitive sentiment expressions like “large bill” and “large portion”, “small tip” and “small portion” from Restaurant. Notice that TMNs can also improve the neutral sentiment (see Table 3). For instance, TMN generates a sentiment score vector of the context “over” for target aspect price: {0.1373, 0.0066, -0.1433} (negative) and for target aspect dinner: {0.0496, 0.0591, 0.1128} (neutral) accurately. But MN produces both negative scores {0.0069, 0.0025, -0.0090} (negative) and {0.0078, 0.0028, -0.0102} (negative) for the two different targets. The latter one in MN is incorrect. 7 Conclusion and Future Work In this paper, we first introduced the targetsensitive sentiment problem in ASC. After that, we discussed the basic memory network for ASC and analyzed the reason why it is incapable of capturing such sentiment from a theoretical perspective. We then presented six techniques to construct target-sensitive memory networks. Finally, we reported the experimental results quantitatively and qualitatively to show their effectiveness. Since ASC is a fine-grained and complex task, there are many other directions that can be further explored, like handling sentiment negation, better embedding for multi-word phrase, analyzing sentiment composition, and learning better attention. We believe all these can help improve the ASC task. The work presented in this paper lies in the direction of addressing target-sensitive sentiment, and we have demonstrated the usefulness of capturing this signal. We believe that there will be more effective solutions coming in the near future. Acknowledgments This work was partially supported by National Science Foundation (NSF) under grant nos. IIS1407927 and IIS-1650900, and by Huawei Technologies Co. Ltd with a research gift. 966 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on memory for aspect sentiment analysis. In Empirical Methods in Natural Language Processing, pages 452–461. Kyunghyun Cho, Aaron Courville, and Yoshua Bengio. 2015. Describing multimedia content using attention-based encoder-decoder networks. In IEEE Transactions on Multimedia, pages 1875– 1886. IEEE. Yejin Choi and Claire Cardie. 2008. Learning with compositional semantics as structural inference for subsentential sentiment analysis. In Empirical Methods in Natural Language Processing, pages 793–801. Li Deng, Dong Yu, et al. 2014. Deep learning: methods and applications. In Foundations and Trends R⃝in Signal Processing, pages 197–387. Now Publishers, Inc. Xiaowen Ding, Bing Liu, and Philip S Yu. 2008. A holistic lexicon-based approach to opinion mining. In ACM International Conference on Web Search and Data Mining, pages 231–240. Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent twitter sentiment classification. In Annual Meeting of the Association for Computational Linguistics, pages 49–54. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693– 1701. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. 
In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 168–177. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Empirical Methods in Natural Language Processing, pages 1746–1751. Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In International Conference on Machine Learning, pages 1378–1387. Bing Liu. 2012. Sentiment analysis and opinion mining. In Synthesis lectures on human language technologies. Morgan & Claypool Publishers. Jiangming Liu and Yue Zhang. 2017. Attention modeling for targeted sentiment. In European Chapter of the Association for Computational Linguistics, pages 572–577. Yue Lu, Malu Castellanos, Umeshwar Dayal, and ChengXiang Zhai. 2011. Automatic construction of a context-aware sentiment lexicon: an optimization approach. In ACM International Conference on World Wide Web, pages 347–356. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Empirical Methods in Natural Language Processing, pages 1412–1421. Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive attention networks for aspect-level sentiment classification. In International Joint Conference on Artificial Intelligence, pages 4068–4074. Karo Moilanen and Stephen Pulman. 2007. Sentiment composition. In RANLP, pages 378–382. Thien Hai Nguyen and Kiyoaki Shirai. 2015. Phrasernn: Phrase recursive neural network for aspect-based sentiment analysis. In Empirical Methods in Natural Language Processing, pages 2509–2514. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Empirical Methods in Natural Language Processing, pages 79–86. Livia Polanyi and Annie Zaenen. 2006. Contextual valence shifters. In Computing attitude and affect in text: Theory and applications, pages 1–10. Springer. Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Haris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task4: Aspect based sentiment analysis. In ProWorkshop on Semantic Evaluation (SemEval-2014). Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Empirical Methods in Natural Language Processing, pages 1631–1642. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. Duyu Tang, Bing Qin, and Ting Liu. 2016. Aspect level sentiment classification with deep memory network. In Empirical Methods in Natural Language Processing, pages 214–224. 967 Yequan Wang, Minlie Huang, Li Zhao, et al. 2016. Attention-based lstm for aspect-level sentiment classification. In Empirical Methods in Natural Language Processing, pages 606–615. Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In International Conference on Learning Representations. Yunfang Wu and Miaomiao Wen. 2010. Disambiguating dynamic sentiment ambiguous adjectives. In International Conference on Computational Linguistics, pages 1191–1199. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. 
Hierarchical attention networks for document classification. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. Lei Zhang, Shuai Wang, and Bing Liu. 2018. Deep learning for sentiment analysis: A survey. In Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, page e1253. Wiley Online Library.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 968–978 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 968 Identifying Transferable Information Across Domains for Cross-domain Sentiment Classification Raksha Sharma1, Pushpak Bhattacharyya1, Sandipan Dandapat2 and Himanshu Sharad Bhatt3 1Department of Computer Science, Indian Institute of Technology Bombay 2Microsoft AI and Research, India 3American Express Big Data Labs, India 1{raksha,pb}@cse.iitb.ac.in [email protected] [email protected] Abstract Getting manually labeled data in each domain is always an expensive and a time consuming task. Cross-domain sentiment analysis has emerged as a demanding concept where a labeled source domain facilitates a sentiment classifier for an unlabeled target domain. However, polarity orientation (positive or negative) and the significance of a word to express an opinion often differ from one domain to another domain. Owing to these differences, crossdomain sentiment classification is still a challenging task. In this paper, we propose that words that do not change their polarity and significance represent the transferable (usable) information across domains for cross-domain sentiment classification. We present a novel approach based on χ2 test and cosine-similarity between context vector of words to identify polarity preserving significant words across domains. Furthermore, we show that a weighted ensemble of the classifiers enhances the cross-domain classification performance. 1 Introduction The choice of the words to express an opinion depends on the domain as users often use domainspecific words (Qiu et al., 2009; Sharma and Bhattacharyya, 2015). For example, entertaining and boring are frequently used in the movie domain to express an opinion; however, finding these words in the electronics domain is rare. Moreover, there are words which are likely to be used across domains in the same proportion, but may change their polarity orientation from one domain to another (Choi et al., 2009). For example, a word like unpredictable is positive in the movie domain (unpredictable plot), but negative in the automobile domain (unpredictable steering). Such a polarity changing word should be assigned positive orientation in the movie domain and negative orientation in the automobile domain.1 Due to these differences across domains, a supervised algorithm trained on a labeled source domain, does not generalize well on an unlabeled target domain and the cross-domain performance degrades. Generally, supervised learning algorithms have to be re-trained from scratch on every new domain using the manually annotated review corpus (Pang et al., 2002; Kanayama and Nasukawa, 2006; Pang and Lee, 2008; Esuli and Sebastiani, 2005; Breck et al., 2007; Li et al., 2009; Prabowo and Thelwall, 2009; Taboada et al., 2011; Cambria et al., 2013; Rosenthal et al., 2014). This is not practical as there are numerous domains and getting manually annotated data for every new domain is an expensive and time consuming task (Bhattacharyya, 2015). On the other hand, domain adaptation techniques work in contrast to traditional supervised techniques on the principle of transferring learned knowledge across domains (Blitzer et al., 2007; Pan et al., 2010; Bhatt et al., 2015). 
The existing transfer learning based domain adaptation algorithms for cross-domain classification have generally been proven useful in reducing the labeled data requirement, but they do not consider words like unpredictable that change polarity orientation across domains. Transfer (reuse) of changing polarity words affects the cross-domain performance negatively. Therefore, one cannot use transfer learning as the proverbial hammer, rather one needs to gauge what to transfer from the source domain to the target domain. In this paper, we propose that the words which 1The word ‘unpredictable’ is a classic example of changing (inconsistent) polarity across domains (Turney, 2002; Fahrni and Klenner, 2008). 969 are equally significant with a consistent polarity across domains represent the usable information for cross-domain sentiment analysis. χ2 is a popularly used and reliable statistical test to identify significance and polarity of a word in an annotated corpus (Oakes et al., 2001; Al-Harbi et al., 2008; Cheng and Zhulyn, 2012; Sharma and Bhattacharyya, 2013). However, for an unlabeled corpus no such statistical technique is applicable. Therefore, identification of words which are significant with a consistent polarity across domains is a non-trivial task. In this paper, we present a novel technique based on χ2 test and cosine-similarity between context vector of words to identify Significant Consistent Polarity (SCP) words across domains.2 The major contribution of this research is as follows. 1. Extracting significant consistent polarity words across domains: A technique which exploits cosine-similarity between context vector of words and χ2 test is used to identify SCP words across labeled source and unlabeled target domains. 2. An ensemble-based adaptation algorithm: A classifier (Cs) trained on SCP words in the labeled source domain acts as a seed to initiate a classifier (Ct) on the target specific features. These classifiers are then combined in a weighted ensemble to further enhance the cross-domain classification performance. Our results show that our approach gives a statistically significant improvement over Structured Correspondence Learning (SCL) (Bhatt et al., 2015) and common unigrams in identification of transferable words, which eventually facilitates a more accurate sentiment classifier in the target domain. The road-map for rest of the paper is as follows. Section 2 describes the related work. Section 3 describes the extraction of the SCP and the ensemble-based adaptation algorithm. Section 4 elaborates the dataset and the experimental protocol. Section 5 presents the results and section 6 reports the error analysis. Section 7 concludes the paper.3 2SCP words are words which are significant in both the domains with consistent polarity orientation. 3Majority of this work is done at Conduent Labs India till February 2016. 2 Related Work The most significant efforts in the learning of transferable knowledge for cross-domain text classification are Structured Correspondence Learning (SCL) (Blitzer et al., 2007) and Structured Feature Alignment (SFA) (Pan et al., 2010). SCL aims to learn the co-occurrence between features from the two domains. It starts with learning pivot features that occur frequently in both the domains. It models correlation between pivots and all other features by training linear predictors to predict presence of pivot features in the unlabeled target domain data. SCL has shown significant improvement over a baseline (shift-unaware) model. 
SFA uses some domain-independent words as a bridge to construct a bipartite graph to model the co-occurrence relationship between domainspecific words and domain-independent words. Our approach also exploits the concept of cooccurrence (Pan et al., 2010), but we measure the co-occurrence in terms of similarity between context vector of words, unlike SCL and SFA, which literally look for the co-occurrence of words in the corpus. The use of context vector of words in place of words helps to overcome the data sparsity problem (Sharma et al., 2015). Domain adaptation for sentiment classification has been explored by many researchers (Jiang and Zhai, 2007; Ji et al., 2011; Saha et al., 2011; Glorot et al., 2011; Xia et al., 2013; Zhou et al., 2014; Bhatt et al., 2015). Most of the works have focused on learning a shared low dimensional representation of features that can be generalized across different domains. However, none of the approaches explicitly analyses significance and polarity of words across domains. On the other hand, Glorot et al., (2011) proposed a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Zhou et al., (2014) also proposed a deep learning approach to learn a feature mapping between cross-domain heterogeneous features as well as a better feature representation for mapped data to reduce the bias issue caused by the crossdomain correspondences. Though deep learning based approaches perform reasonably good, they don’t perform explicit identification and visualization of transferable features across domains unlike SFA and SCL, which output a set of words as transferable (reusable) features. Our approach explicitly determines the words which are equally 970 significant with a consistent polarity across source and target domains. Our results show that the use of SCP words as features identified by our approach leads to a more accurate cross-domain sentiment classifier in the unlabeled target domain. 3 Approach: Cross-domain Sentiment Classification The proposed approach identifies words which are equally significant for sentiment classification with a consistent polarity across source and target domains. These Significant Consistent Polarity (SCP) words make a set of transferable knowledge from the labeled source domain to the unlabeled target domain for cross-domain sentiment analysis. The algorithm further adapts to the unlabeled target domain by learning target domain specific features. The following sections elaborate SCP features extraction (3.1) and the ensemblebased cross-domain adaptation algorithm (3.2). 3.1 Extracting SCP Features The words which are not significant for classification in the labeled source domain, do not transfer useful knowledge to the target domain through a supervised classifier trained in the source domain. Moreover, words that are significant in both the domains, but have different polarity orientation transfer the wrong information to the target domain through a supervised classifier trained in the labeled source domain, which also downgrade the cross-domain performance. Our algorithm identifies the significance and the polarity of all the words individually in their respective domains. Then the words which are significant in both the domains with the consistent polarity orientation are used to initiate the crossdomain adaptation algorithm. 
The following sections elaborate how the significance and the polarity of the words are obtained in the labeled source and the unlabeled target domains. 3.1.1 Extracting Significant Words with the Polarity Orientation from the Labeled Source Domain Since we have a polarity annotated dataset in the source domain, a statistical test like χ2 test can be applied to find the significance of a word in the corpus for sentiment classification (Cheng and Zhulyn, 2012; Zheng et al., 2004). We have used goodness of fit chi2 test with equal number of reviews in positive and negative corpora. This test is generally used to determine whether sample data is consistent with a null hypothesis.4 Here, the null hypothesis is that the word is equally used in the positive and the negative corpora. The χ2 test is formulated as follows: χ2(w) = ((cw p −µw)2 + (cw n −µw)2)/µw (1) Where, cw p is the observed count of a word w in the positive documents and cw n is the observed count in the negative documents. µw represents an average of the word’s count in the positive and the negative documents. Here, µw is the expected count or the value of the null-hypothesis. There is an inverse relation between χ2 value and the p-value which is probability of the data given null hypothesis is true. In such a case where a word results in a pvalue smaller than the critical p-value (0.05), we reject the null-hypothesis. Consequently, we assume that the word w belongs to a particular class (positive or negative) in the data, hence it is a significant word for classification (Sharma and Bhattacharyya, 2013). Polarity of Words in the Labeled Source Domain: Chi-square test substantiates the statistically significant association of a word with a class label. Based on this association we assign a polarity orientation to a word in the domain. In other words, if a word is found significant by χ2 test, then the exact class of the word is determined by comparing cw p and cw n . For instance, if cw p is higher than cw n , then the word is positive, else negative. 3.1.2 Extracting Significant Words with the Polarity Orientation from the Unlabeled Target Domain Target domain data is unlabeled and hence, χ2 test cannot be used to find significance of the words. However, to obtain SCP words across domains, we take advantage of the fact that we have to identify significance of only those words in the target domain which are already proven to be significant in the source domain. We presume that a word which is significant in the source domain as per χ2 test and occurs with a frequency greater than a certain threshold (θ) in the target domain is significant in the target domain also. countt(significants(w)) > θ ⇒significantt(w) (2) 4http://stattrek.com/chi-square-test/ goodness-of-fit.aspx?Tutorial=AP. 971 Equation (2) formulates the significance test in the unlabeled target (t) domain. Here, function significants assures the significance of the word w in the labeled source (s) domain and countt gives the normalized count of the w in t.5 χ2 test has one key assumption that the expected value of an observed variable should not be less than 5 to be significant. 
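A small sketch of the two tests just defined, Eqs. (1)-(2), is given below. The critical chi-square value of 3.84 corresponds to p = 0.05 with one degree of freedom; the count handling follows the description above, the default theta of 10 anticipates the choice explained just below, and the function and variable names are ours.

CRITICAL_CHI2 = 3.84   # chi-square value at p = 0.05 with 1 degree of freedom

def source_significance_and_polarity(c_pos, c_neg):
    # Eq. (1) on the labeled source domain: goodness-of-fit chi-square test
    # against the null hypothesis of equal use in positive and negative reviews.
    mu = (c_pos + c_neg) / 2.0
    if mu == 0:
        return False, None
    chi2 = ((c_pos - mu) ** 2 + (c_neg - mu) ** 2) / mu
    if chi2 <= CRITICAL_CHI2:              # p-value above 0.05: not significant
        return False, None
    return True, ("positive" if c_pos > c_neg else "negative")

def significant_in_target(target_count, source_significant, theta=10):
    # Eq. (2) on the unlabeled target domain: a source-significant word whose
    # target-side (normalized) count exceeds the threshold theta.
    return source_significant and target_count > theta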
Considering this assumption as a base, we fix the value of θ as 10.6 Polarity of Words in the Unlabeled Target Domain: Generally, in a polar corpus, a positive word occurs more frequently in context of other positive words, while a negative word occurs in context of other negative words (Sharma et al., 2015).7 Based on this hypothesis, we explore the contextual information of a word that is captured well by its context vector to assign polarity to words in the target domain (Rill et al., 2012; Rong, 2014). Mikolov et al., (2013) showed that similarity between context vector of words in vicinity such as ‘go’ and ‘to’ is higher compared to distant words or words that are not in the neighborhood of each other. Here, the observed concept is that if a word is positive, then its context vector learned from the polar review corpus will give higher cosine-similarity with a known positive polarity word in comparison to a known negative polarity word or vice versa. Therefore, based on the cosine-similarity scores we can assign the label of the known polarity word to the unknown polarity word. We term known polarity words as Positivepivot and Negative-pivot. Context Vector Generation: To compute context vector (conV ec) of a word (w), we have used publicly available word2vec toolkit with the skip-gram model (Mikolov et al., 2013).8 In this model, each word’s Huffman code is used as an input to a log-linear classifier with a continuous projection layer and words within a given window are predicted (Faruqui et al., 2014). We construct a 100 dimensional vector for each can5Normalized count of w in t shows the proportion of occurrences of w in t. 6We tried with smaller values of theta also, but they were not found as effective as theta value of 10 for significant words identification. 7For example, ‘excellent’ will be used more often in positive reviews in comparison to negative reviews, hence, it would have more positive words in its context. Likewise, ‘terrible’ will be used more frequently in negative reviews in comparison to positive reviews, hence, it would have more negative words in its context. 8 Available at: https://radimrehurek.com/ gensim/models/word2vec.html didate word from the unlabeled target domain data. The decision method given in Equation 3 defines the polarity assignment to the unknown polarity words of the target domain. If a word w gives a higher cosine-similarity with the PosPivot (Positive-pivot) than the NegPivot (Negative-pivot), the decision method assigns the positive polarity to the word w, else negative polarity to the word w. If(cosine(conV ec(w), conV ec(PosPivot)) > cosine(conV ec(w), conV ec(NegPivot))) ⇒Positive If(cosine(conV ec(w), conV ec(PosPivot)) < cosine(conV ec(w), conV ec(NegPivot))) ⇒Negative (3) Pivot Selection Method: We empirically observed that a polar word which has the highest frequency in the corpus gives more coverage to estimate the polarity orientation of other words while using context vector. Essentially, the frequent occurrence of the word in the corpus allows it to be in context of other words frequently. Therefore a polar word having the highest frequency in the target domain is observed to be more accurate as pivot for identification of polarity of input words.9 Table 1 shows the examples of a few words in the electronics domain whose polarity orientation is derived based on the similarity scores obtained with PosPivot and NegPivot words in the electronics domain. 
Transferable Knowledge: The proposed algorithm uses the above mentioned techniques to identify the significance and the polarity of words in the labeled source data (cf. Section 3.1.1) and the unlabeled target data (cf. Section 3.1.2). The words which are found significant in both the domains with the same polarity orientation form a set of SCP features for cross-domain sentiment classification. The weights learned for the SCP features in the labeled source domain by the classification algorithm can be reused for sentiment classification in the unlabeled target domain as SCP features have consistent impacts in both the domains. 9 To obtain the highest frequency based pivots, words in the target corpus (unlabeled) were ordered based on their frequency in the corpus, then a few top words were manually observed (by three human annotators) to pick out a positive word and a negative word. The positive and negative polarity of pivots were confirmed manually to get rid of random high frequency words (for example, function words). These highest frequency polar words were set as Positive-pivot and Negative-pivot. 972 Word Great Poor Polarity Noisy 0.03 0.24 Neg Crap 0.04 0.28 Neg Weak 0.05 0.21 Neg Defective 0.21 0.70 Neg Sturdy 0.43 0.04 Pos Durable 0.44 0.00 Pos Perfect 0.48 0.20 Pos Handy 0.60 0.21 Pos Table 1: Cosine-similarity scores with PosPivot (great) and NegPivot (poor), and inferred polarity orientation of the words. Symbol Description s, t Represent Source (s) and Target (t) respectively l, u Represent labeled and unlabeled respectively Dl s, Du t Represent Dataset in s and t domains respectively Vs, Vt Vocabularies of words in the s and t respectively ri s,ri t ith review in Dl s and Du t respectively sigP ol() Identifies significant words with their polarity f Set of features SVM Implemented classification algorithm Cs Classifier Cs is trained on Dl s with SCP as features Rn t Top-n reviews in t as per classification score by Cs Ct Classifier Ct is trained on Rt n unigrams() Gives bag-of-words Ws, Wt Weights for Cs and Ct respectively WSM Weighted Sum Model Table 2: Notations used in the paper 3.2 Ensemble-based Cross-domain Adaptation Algorithm Apart from the transferable SCP words (Obtained in Section 3.1), each domain has specific discriminating words which can be discovered only from that domain data. The proposed cross-domain adaptation approach (Algorithm 1) attempts to learn such domain specific features from the target domain using a classifier trained on SCP words in the source domain. An ensemble of the classifiers trained on the SCP features (transferred from the source) and domain specific features (learned within the target) further enhances the cross-domain performance. Table 2 lists the notations used in the algorithm. The working of the cross-domain adaptation algorithm is as follows: 1. Identify SCP features from the labeled source and the unlabeled target domain data. 2. A SVM based classifier is trained on SCP words as features using labeled source domain data, named as Cs. 3. The classifier Cs is used to predict the labels for the unlabeled target domain instances Du t , and the confidently predicted instances of Du t form a set of pseudo labeled instances Rn t . 4. A SVM based classifier is trained on the pseudo labeled target domain instances Rn t , using unigrams in Rn t as features to include the target specific words, this classifier is named as Ct . 5. Finally, a Weighted Sum Model (WSM) of Cs and Ct gives a classifier in the target domain. 
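A minimal scikit-learn sketch of steps 2-5 above is given next (step 1's SCP extraction is assumed given, and the whole procedure is formalized in Algorithm 1 below). Binary labels are +1/-1; the confidence cut-off on the classification score and the use of target-domain validation accuracies as ensemble weights follow the description in this section, while the vectorizer settings and names are illustrative.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def build_target_classifier(src_docs, src_labels, tgt_docs, scp_words, phi=0.2):
    # Step 2: C_s trained on the labeled source using only SCP words as features.
    vec_s = CountVectorizer(vocabulary=sorted(scp_words), binary=True)
    C_s = LinearSVC().fit(vec_s.transform(src_docs), src_labels)

    # Step 3: pseudo-label the target; keep only confidently scored reviews (R^n_t).
    scores = C_s.decision_function(vec_s.transform(tgt_docs))
    keep = np.abs(scores) > phi
    pseudo_docs = [d for d, k in zip(tgt_docs, keep) if k]
    pseudo_labels = np.where(scores[keep] > 0, 1, -1)

    # Step 4: C_t trained on unigrams of the pseudo-labeled target reviews.
    vec_t = CountVectorizer(binary=True)
    C_t = LinearSVC().fit(vec_t.fit_transform(pseudo_docs), pseudo_labels)

    # Step 5: weighted sum of the two classifiers' scores; w_s and w_t are the
    # validation-set accuracies of C_s and C_t on the target domain.
    def wsm_predict(docs, w_s, w_t):
        s = (w_s * C_s.decision_function(vec_s.transform(docs))
             + w_t * C_t.decision_function(vec_t.transform(docs))) / (w_s + w_t)
        return np.where(s > 0, 1, -1)

    return wsm_predict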
The confidence in the prediction of Du t is measured in terms of the classification-score of the document, i.e., the distance of the input document from the separating hyper-plane given by the SVM classifier (Hsu et al., 2003). The top n confidently predicted pseudo labeled instances (Rn t ) are used to train classifier Ct, where n depends on a threshold that is empirically set to | ± 0.2|.10 The classifier Cs trained on the SCP features (transferred knowledge) from the source domain and the classifier Ct trained on self-discovered target specific features from the pseudo labeled target domain instances bring in complementary information from the two domains. Therefore, combining Cs and Ct in a weighted ensemble (WSM) further enhances the cross-domain performance. Algorithm 1 gives the pseudo code of the proposed adaptation approach. Input: Dl s = {r1 s, r2 s, r3 s, ....rj s}, Du t = {r1 t , r2 t , r3 t , ....rk t }, Vs = {w1 s, w2 s, w3 s, ....wp s}, Vt = {w1 t , w2 t , w3 t , ....wq t } Output: Sentiment Classifier in the Target Domain 1: SCP = sigPol(Dl s) ∩sigPol(Du t ) 2: Cs = Train-SVM(Dl s), where f = SCP 3: Predict Label: Cs(Du t ) →Dl t 4: Select: Rn t | ∀ri t ∈Du t , Cs(ri t) > φ, where i ∈ {1, 2....k} and n <= k 5: Ct = Train-SVM(Rn t ), where f = {unigrams(Rn t )} 6: WSM = (Cs ∗Ws + Ct ∗Wt)/(Ws + Wt) 7: Sentiment Classifier in the Target Domain = WSM ALGORITHM 1: Building of target domain classifier from the source domain Weighted Sum Model (WSM): The weighted ensemble of classifiers helps to overcome the er10Balamurali et al., (2013) have shown that 350 to 400 labeled documents are required to get a high accuracy classifier in a domain using supervised classification techniques, but beyond 400 labeled documents there is not much improvement in the classification accuracy. Hence, threshold on classification score is set such that it can give a sufficient number of documents for supervised classification. Threshold |±0.2| gives documents between 350 to 400. 973 rors produced by the individual classifier. The formulation of WSM is given in step-6 of the Algorithm 1. If Cs has wrongly predicted a document at boundary point and Ct has predicted the same document confidently, then weighted sum of Cs and Ct predicts the document correctly or vice versa. For example, a document is classified by Cs as negative (wrong prediction) with a classification-score of −0.07, while the same document is classified by Ct as positive (correct prediction) with a classification-score of 0.33, the WSM of Cs and Ct will classify the document as positive with a classification-score of 0.12 (Equation 4). WSM = (−0.07 ∗0.765 + 0.33 ∗0.712) (0.765 + 0.712) = 0.12 (4) Here 0.765 and 0.712 are the weights Ws and Wt to the classifiers Cs and Ct respectively. Weights to the Classifiers in WSM: The weights Ws and Wt are the classification accuracies obtained by Cs and Ct respectively on the crossvalidation data from the target domain. The weights Ws and Wt allow Cs and Ct to participate in the WSM in proportion of their accuracy on the cross-validation data. This restriction facilitates the domination of the classifier which is more accurate. 4 Dataset & Experimental Protocol In this paper, we show comparison between SCPbased domain adaptation (our approach) and SCLbased domain adaptation approach proposed by Bhatt el al. 
(2015) using four domains, viz., Electronics (E), Kitchen (K), Books (B), and DVD.11 We use SVM algorithm with linear kernel (Tong and Koller, 2002) to train a classifier in all the mentioned classification systems in the paper. To implement SVM algorithm, we have used the publicly available Python based Scikit-learn package (Pedregosa et al., 2011).12 Data in each domain is divided into three parts, viz., train (60%), validation (20%) and test (20%). The SCP words are extracted from the training data. The weights WS and Wt for the source and target classifiers are essentially accuracies obtained by Cs and Ct 11The same multi-domain dataset is used by Bhatt et al. (2015), available at: http://www.cs.jhu.edu/ ˜mdredze/datasets/sentiment/index2.html. 12Available at: http://scikit-learn.org/ stable/modules/svm.html. respectively on validation dataset from the target domain. We report the accuracy for all the systems on the test data. Table 3 shows the statistics of the dataset. Domain No. of Reviews Avg. Length Electronic (E) 2000 110 words Kitchen (K) 2000 93 words Books (B) 2000 173 words DVD (D) 2000 197 words Table 3: Dataset statistics 5 Results In this paper, we compare our approach with Structured Correspondence Learning (SCL) and common unigrams. SCL is used by Bhatt et al., (2015) for identification of transferable information from the labeled source domain to the unlabeled target domain for cross-domain sentiment analysis. They showed that transferable features extracted by SCL provide a better cross-domain sentiment analysis system than the transferable features extracted by Structured Feature Alignment (Pan et al., 2010). The SCL-based sentiment classifier in the target domain proposed by Bhatt et. al., (2015) is state-of-the-art for cross-domain sentiment analysis. On the other hand, common unigrams of the source and target are the most visible transferable information.13 Gold standard SCP words: Chi-square test gives us significance and polarity of the word in the corpus by taking into account the polarity labels of the reviews. Application of chi-square test in both the domains, considering that the target domain is also labeled, gives us gold standard SCP words. There is no manual annotation involved. F-score for SCP Words Identification Task: The set of SCP words represent the usable information across domains for cross-domain classification, hence we compare the F-score for the SCP words identification task obtained with our approach, SCL and common-unigrams in Figure 1. It demonstrates that our approach gives a huge improvement in the F-score over SCL and common unigrams for all the 12 pairs of the source and target domains. To measure the statistical significance of this improvement, we applied t-test on the F-score distribution obtained with our approach, SCL and common unigrams. t-test is a 13Common unigrams is a set of unique words which appear in both the domains. 974 statistical significance test. It is used to determine whether two sets of data are significantly different or not.14 Our approach performs significantly better than SCL and common unigrams, while SCL performs better than common unigrams as per ttest. Comparison among Cs, Ct and WSM: Table 4 shows the comparison among classifiers obtained in the target domain using SCP given by our approach, SCL, common-unigrams, and gold standard SCP for electronics as the source and movie as the target domains. 
F-score for SCP Words Identification Task: The set of SCP words represents the usable information across domains for cross-domain classification; hence we compare the F-scores for the SCP words identification task obtained with our approach, SCL and common unigrams in Figure 1. It demonstrates that our approach gives a large improvement in F-score over SCL and common unigrams for all 12 pairs of source and target domains. To measure the statistical significance of this improvement, we applied the t-test to the F-score distributions obtained with our approach, SCL and common unigrams. The t-test is a statistical significance test used to determine whether two sets of data are significantly different or not [14]. Our approach performs significantly better than SCL and common unigrams, while SCL performs better than common unigrams as per the t-test.

[14] Details about the test are available at: http://www.socialresearchmethods.net/kb/stat_t.php

Comparison among C_s, C_t and WSM: Table 4 shows the comparison among the classifiers obtained in the target domain using the SCP words given by our approach, SCL, common unigrams, and the gold standard SCP words, for electronics as the source and movie as the target domain. Since electronics and movie are two very dissimilar domains in terms of domain-specific words, unlike books and movie, obtaining a high-accuracy classifier in the movie domain from the electronics domain is a challenging task (Pang et al., 2002). Therefore, the results in Table 4 are reported with electronics as the source domain and movie as the target domain [15]. In all four cases, the transferred information from the source to the target differs, but the ensemble-based classification algorithm (Section 3.2) is the same. Table 4 reports the sentiment classification accuracy obtained with C_s, C_t and WSM. The weights W_s and W_t in WSM are the normalized accuracies obtained by C_s and C_t respectively on the validation set from the target domain. The fourth column (Size) gives the feature set size. We observe that WSM gives the highest accuracy, which validates our assumption that a weighted sum of two classifiers performs better than either individual classifier. The WSM accuracy obtained with the SCP words given by our approach is comparable to the accuracy obtained with the gold standard SCP words.

[15] The movie review dataset is a balanced corpus of 2000 reviews, available at: http://www.cs.cornell.edu/people/pabo/movie-review-data/

          W_s & W_t      Features               Size    Acc
C_s                      SCP (Our approach)     296     75.0
C_t                      Unigrams               4751    74.3
WSM       0.72 & 0.69                                   77.5
C_s                      SCL                    1000    66.8
C_t                      Unigrams               4615    68.0
WSM       0.63 & 0.61                                   69.3
C_s                      Common unigrams        2236    64.0
C_t                      Unigrams               4236    64.0
WSM       0.62 & 0.58                                   65.0
C_s                      SCP (Gold standard)    163     77.0
C_t                      Unigrams               1183    78.5
WSM       0.73 & 0.75                                   80.0
Table 4: Classification accuracy in % given by C_s, C_t and WSM with different feature sets for electronics as source and movie as target.

The motivation of this research is to learn a shared representation that is cognizant of significant and polarity-changing words across domains. Hence, we report the cross-domain classification accuracy obtained with three different types of shared representations (transferable knowledge), viz., common unigrams, SCL and our approach [16]. System-1, System-2 and System-3 in Table 5 show the final cross-domain sentiment classification accuracy obtained with WSM in the target domain for the 12 pairs of source and target domains using common unigrams, SCL and our approach respectively. System-1: This system considers the common unigrams of both domains as the shared representation. System-2: It differs from System-1 in the shared representation, which is learned using Structured Correspondence Learning (SCL) (Bhatt et al., 2015) to initiate the process. System-3: This system implements the proposed domain adaptation algorithm; here, the shared representation is the set of SCP words, and the ensemble-based domain adaptation algorithm (Section 3.2) gives the final classifier in the target domain.

[16] The reported accuracy is the ratio of correctly predicted documents to the total number of documents in the test dataset.

Table 5 shows that System-3 is better than System-1 and System-2 for all pairs, except K→B and B→D. For these two pairs, System-2 performs better than System-3, though the difference in accuracy is very low (below 1%). To enhance the final accuracy in the target domain, Bhatt et al. (2015) performed iterations over the pseudo-labeled target domain instances (R_t^n). In each iteration, they obtained a new C_t trained on an increased number of pseudo-labeled documents. This process is repeated until all the training instances of the target domain are considered. The C_t obtained in the last iteration forms a WSM with C_s, which is trained on the transferable features given by SCL. Bhatt et al. (2015) have shown that this iteration-based domain adaptation technique is more effective than one-shot adaptation approaches.
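A schematic version of that iterative scheme is sketched below. The batch size, the re-scoring policy, and the refreshing of pseudo-labels are simplifications of our own, not details taken from Bhatt et al. (2015).

```python
# Sketch of iteration-based adaptation: in each round, add the most confidently
# pseudo-labeled target documents and retrain C_t, until the whole pool is used.
import numpy as np
from sklearn.svm import LinearSVC

def iterative_adaptation(X_target, source_scores, batch=200):
    """X_target: (n_docs x n_features) matrix of target training documents;
    source_scores: C_s decision scores used for the initial pseudo-labels."""
    n = X_target.shape[0]
    scores = np.asarray(source_scores, dtype=float)
    selected = np.zeros(n, dtype=bool)
    C_t = None
    while selected.sum() < n:
        remaining = np.where(~selected)[0]
        # add the `batch` most confident documents not yet in the training pool
        ranked = remaining[np.argsort(-np.abs(scores[remaining]))][:batch]
        selected[ranked] = True
        idx = np.where(selected)[0]
        pseudo_labels = (scores[idx] > 0).astype(int)   # assumes binary labels
        C_t = LinearSVC().fit(X_target[idx], pseudo_labels)
        # re-score the whole pool with the latest C_t for the next round
        scores = C_t.decision_function(X_target)
    return C_t
```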
Figure 1: F-score for the SCP words identification task (Source → Target) with respect to the gold standard SCP words.

        System-1   System-2   System-3   System-4   System-5   System-6
D→B     62         64.2       67         66         76.5       78.5
E→B     63         58.9       68.3       67         75.6       76.3
K→B     67         68.75      67.85      69         71.2       74
B→D     76         81         80.5       77         81.5       81.5
E→D     68         71         77.5       71.5       74         80.5
K→D     69         69         74         71         75.2       77
B→E     68         66         73         69         79         81.2
D→E     61         62         74         66         73.2       74.2
K→E     76         75.75      80         78         81         82
B→K     66         67.5       72         69         79.2       80.5
D→K     65.75      67         71         66         80         81
E→K     74.25      75         85.75      76         84         85.75
Table 5: Cross-domain sentiment classification accuracy in the target domain (Source (S) → Target (T)).

Domain            Significant Words   Unigrams
Books (B)         76                  89
DVD (D)           82.5                84
Electronics (E)   82.5                85
Kitchen (K)       82.5                86
Table 6: In-domain sentiment classification accuracy using significant words and unigrams.

System-4, System-5, and System-6 in Table 5 incorporate the iterative process into System-1, System-2, and System-3 respectively. We observe the same trend after the inclusion of the iterative process: the SCP-based System-6 performs best in all 12 cases, while the SCL-based System-5 performs better than the common-unigrams-based System-4. Table 7 shows the results of the significance test (t-test) performed on the accuracy distributions produced by the six systems. The noticeable point is that the iterations over SCL (System-5) and over our approach (System-6) narrow the accuracy gap between System-2 and System-3: System-2 and System-3 differ significantly, with a p-value of 0.039 (sys2 vs sys3 in Table 7), but the difference between System-5 and System-6 is not statistically significant. Essentially, System-3 does not gain much from iterations, unlike System-2. In other words, adding the iterative process on top of the shared representation given by SCL overcomes the errors introduced by SCL, whereas the SCP words given by our approach produce a less erroneous system in one shot. Table 6 shows the in-domain sentiment classification accuracy obtained with unigrams and with significant words as features, using the labeled data in each domain; System-6 approaches the in-domain accuracy obtained with unigrams.

                P-value   Significant?
sys1 vs sys2    0.719     No
sys1 vs sys3    0.011     Yes
sys2 vs sys3    0.039     Yes
sys1 vs sys4    0.219     No
sys2 vs sys4    0.467     No
sys3 vs sys4    0.090     No
sys1 vs sys5    0         Yes
sys2 vs sys5    0         Yes
sys3 vs sys5    0.101     No
sys4 vs sys5    0         Yes
sys1 vs sys6    0         Yes
sys2 vs sys6    0         Yes
sys3 vs sys6    0.0130    Yes
sys4 vs sys6    0         Yes
sys5 vs sys6    0.231     No
Table 7: t-test (α = 0.05) results on the difference in accuracy produced by the various systems (cf. Table 5).

To validate our assertion that polarity-preserving significant words (SCP) across the source and target domains make a less erroneous set of transferable knowledge from the source domain to the target domain, we computed the Pearson product-moment correlation between the F-score obtained for our approach (cf. Figure 1) and the cross-domain accuracy obtained with SCP (System-3, cf. Table 5). We observed a strong positive correlation (r) of 0.78 between F-score and cross-domain accuracy.
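That correlation is straightforward to reproduce once the per-pair scores are collected; a minimal sketch, where the two inputs are the 12 per-pair values behind Figure 1 and the System-3 column of Table 5:

```python
# Sketch: Pearson product-moment correlation between per-pair SCP F-scores
# and per-pair cross-domain accuracies (System-3).
from scipy.stats import pearsonr, ttest_ind

def correlate(scp_fscores, system3_accuracies):
    """Each argument is a list of 12 values, one per source->target pair."""
    r, p = pearsonr(scp_fscores, system3_accuracies)
    return r, p   # the text above reports r = 0.78

# The pairwise significance checks of Table 7 follow the same pattern, e.g.:
# t, p = ttest_ind(accuracies_of_system2, accuracies_of_system3)
```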
Essentially, an accurate set of SCP words positively stimulates an improved classifier in the unlabeled target domain. 6 Error Analysis The pairs of domains which share a greater number of domain-specific words, result in a higher accuracy cross-domain classifier. For example, Electronics (E) and Kitchen (K) domains share many domain-specific words, hence pairing of such similar domains as the source and the target results into a higher accuracy classifier in the target domain. Table 5 shows that K→E outperforms B→E and D→E, and E→K outperforms B→K and D→K. On the other hand, DVD (D) and electronics are two very different domains unlike electronics and Kitchen, or DVD and books. The DVD dataset contains reviews about the music albums. This difference in types of reviews makes them to share less number of words. Table 8 shows the percent (%) of common words among the 4 domains. The percent of common unique words are common unique words divided by the summation of unique words in the domains individually. E - D E - K E - B D - K D - B K - B 15 22 17 14 22 17 Table 8: Common unique words between the domains in percent (%). 7 Conclusion In this paper, we proposed that the Significant Consistent Polarity (SCP) words represent the transferable information from the labeled source domain to the unlabeled target domain for crossdomain sentiment classification. We showed a strong positive correlation of 0.78 between the SCP words identified by our approach and the sentiment classification accuracy achieved in the unlabeled target domain. Essentially, a set of less erroneous transferable features leads to a more accurate classification system in the unlabeled target domain. We have presented a technique based on χ2 test and cosine-similarity between context vector of words to identify SCP words. Results show that the SCP words given by our approach represent more accurate transferable information in comparison to the Structured Correspondence Learning (SCL) algorithm and common-unigrams. Furthermore, we show that an ensemble of the classifiers trained on the SCP features and target specific features overcomes the errors of the individual classifiers. References S Al-Harbi, A Almuhareb, A Al-Thubaity, MS Khorsheed, and A Al-Rajeh. 2008. Automatic arabic text classification. AR Balamurali, Mitesh M Khapra, and Pushpak Bhattacharyya. 2013. Lost in translation: viability of machine translation for cross language sentiment analysis. In Computational Linguistics and Intelligent Text Processing, pages 38–49. Springer. Himanshu S. Bhatt, Deepali Semwal, and S. Roy. 2015. An iterative similarity based adaptation technique for cross-domain text classification. In Proceedings of Conference on Natural Language Learning, pages 52–61. Pushpak Bhattacharyya. 2015. Multilingual Projections. Springer International Publishing, Cham. John Blitzer, Mark Dredze, Fernando Pereira, et al. 2007. Biographies, bollywood, boom-boxes and 977 blenders: Domain adaptation for sentiment classification. In Proceedings of Association for Computational Linguistics, pages 440–447. Eric Breck, Yejin Choi, and Claire Cardie. 2007. Identifying expressions of opinion in context. In Proceedings of International Joint Conference on Artificial Intelligence, pages 2683–2688. Erik Cambria, Bjorn Schuller, Yunqing Xia, and Catherine Havasi. 2013. New avenues in opinion mining and sentiment analysis. IEEE Intelligent Systems, (2):15–21. Alex Cheng and Oles Zhulyn. 2012. A system for multilingual sentiment learning on large data sets. 
In Proceedings of International Conference on Computational Linguistics, pages 577–592. Yoonjung Choi, Youngho Kim, and Sung-Hyon Myaeng. 2009. Domain-specific sentiment analysis using contextual feature generation. In Proceedings of the 1st international CIKM workshop on Topicsentiment analysis for mass opinion, pages 37–44. ACM. Andrea Esuli and Fabrizio Sebastiani. 2005. Determining the semantic orientation of terms through gloss classification. In Proceedings of International Conference on Information and Knowledge Management, pages 617–624. Angela Fahrni and Manfred Klenner. 2008. Old wine or warm beer: Target-specific sentiment analysis of adjectives. In Proc. of the Symposium on Affective Language in Human and Machine, AISB, pages 60– 63. Manaal Faruqui, Jesse Dodge, Sujay K Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2014. Retrofitting word vectors to semantic lexicons. arXiv preprint arXiv:1411.4166. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 513–520. Chih-Wei Hsu, Chih-Chung Chang, and Chih-Jen Lin. 2003. A practical guide to support vector classification. Technical report, Department of Computer Science, National Taiwan University. Yang-Sheng Ji, Jia-Jun Chen, Gang Niu, Lin Shang, and Xin-Yu Dai. 2011. Transfer learning via multiview principal component analysis. Journal of Computer Science and Technology, 26(1):81–98. Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in nlp. In Proceedings of Association for Computational Linguistics, pages 264–271. Hiroshi Kanayama and Tetsuya Nasukawa. 2006. Fully automatic lexicon expansion for domain-oriented sentiment analysis. In Proceedings of Conference on Empirical Methods in Natural Language Processing, pages 355–363. Tao Li, Yi Zhang, and Vikas Sindhwani. 2009. A nonnegative matrix tri-factorization approach to sentiment classification with lexical prior knowledge. In Proceedings of International Joint Conference on Natural Language Processing, pages 244–252. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR. Michael Oakes, Robert Gaaizauskas, Helene Fowkes, Anna Jonsson, Vincent Wan, and Micheline Beaulieu. 2001. A method based on the chi-square test for document classification. In Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, pages 440–441. ACM. Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In Proceedings of International Conference on World Wide Web, pages 751–760. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1–135. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of Conference on Empirical Methods in Natural Language Processing, pages 79–86. Fabian Pedregosa, Ga¨el Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. Journal of Machine Learning Research, 12(Oct):2825–2830. Rudy Prabowo and Mike Thelwall. 2009. 
Sentiment analysis: A combined approach. Journal of Informetrics, 3(2):143–157. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2009. Expanding domain sentiment lexicon through double propagation. In IJCAI, volume 9, pages 1199– 1204. Sven Rill, J¨org Scheidt, Johannes Drescher, Oliver Sch¨utz, Dirk Reinel, and Florian Wogenstein. 2012. A generic approach to generate opinion lists of phrases for opinion mining applications. In Proceedings of the First International Workshop on Issues of Sentiment Discovery and Opinion Mining, page 7. ACM. Xin Rong. 2014. word2vec parameter learning explained. arXiv preprint arXiv:1411.2738. 978 Sara Rosenthal, Preslav Nakov, Alan Ritter, and Veselin Stoyanov. 2014. Semeval-2014 task 9: Sentiment analysis in twitter. In Proceedings of SemEval, pages 73–80. Avishek Saha, Piyush Rai, Hal Daum´e III, Suresh Venkatasubramanian, and Scott L DuVall. 2011. Active supervised domain adaptation. In Machine Learning and Knowledge Discovery in Databases, pages 97–112. Raksha Sharma and Pushpak Bhattacharyya. 2013. Detecting domain dedicated polar words. In Proceedings of the International Joint Conference on Natural Language Processing, pages 661–666. Raksha Sharma and Pushpak Bhattacharyya. 2015. Domain sentiment matters: A two stage sentiment analyzer. In Proceedings of the International Conference on Natural Language Processing. Raksha Sharma, Mohit Gupta, Astha Agarwal, and Pushpak Bhattacharyya. 2015. Adjective intensity and sentiment analysis. In Proceedings of Conference on Empirical Methods in Natural Language Processing. Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis. Computational Linguistics, 37(2):267–307. Simon Tong and Daphne Koller. 2002. Support vector machine active learning with applications to text classification. The Journal of Machine Learning Research, 2:45–66. Peter D. Turney. 2002. Thumbs up or thumbs down?: Semantic orientation applied to unsupervised classification of reviews. In Proceedings of Association for Computational Linguistics, pages 417–424. Rui Xia, Chengqing Zong, Xuelei Hu, and Erik Cambria. 2013. Feature ensemble plus sample selection: domain adaptation for sentiment classification. IEEE Intelligent Systems, 28(3):10–18. Zhaohui Zheng, Xiaoyun Wu, and Rohini Srihari. 2004. Feature selection for text categorization on imbalanced data. ACM Sig KDD Explorations Newsletter, 6(1):80–89. Joey Tianyi Zhou, Sinno Jialin Pan, Ivor W Tsang, and Yan Yan. 2014. Hybrid heterogeneous transfer learning through deep learning. In AAAI, pages 2213–2220.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 87–96 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 87 Ultra-Fine Entity Typing Eunsol Choi† Omer Levy† Yejin Choi†♯ Luke Zettlemoyer† †Paul G. Allen School of Computer Science & Engineering, University of Washington ♯Allen Institute for Artificial Intelligence, Seattle WA {eunsol,omerlevy,yejin,lsz}@cs.washington.edu Abstract We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets.1 1 Introduction Entities can often be described by very fine grained types. Consider the sentences “Bill robbed John. He was arrested.” The noun phrases “John,” “Bill,” and “he” have very specific types that can be inferred from the text. This includes the facts that “Bill” and “he” are both likely “criminal” due to the “robbing” and “arresting,” while “John” is more likely a “victim” because he was “robbed.” Such fine-grained types (victim, criminal) are important for context-sensitive tasks such 1Our data and model can be downloaded from: http://nlp.cs.washington.edu/entity_type Sentence with Target Entity Entity Types During the Inca Empire, {the Inti Raymi} was the most important of four ceremonies celebrated in Cusco. event, festival, ritual, custom, ceremony, party, celebration {They} have been asked to appear in court to face the charge. person, accused, suspect, defendant Ban praised Rwanda’s commitment to the UN and its role in {peacemaking operations}. event, plan, mission, action Table 1: Examples of entity mentions and their annotated types, as annotated in our dataset. The entity mentions are bold faced and in the curly brackets. The bold blue types do not appear in existing fine-grained type ontologies. as coreference resolution and question answering (e.g. “Who was the victim?”). Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities. To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence. Table 1 shows three examples that exhibit a rich variety of types at different granularities. Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns. 
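To make the input and output of the task concrete, the first example of Table 1 can be written as a single typing instance roughly as follows; the field names are illustrative and need not match the released data format at the project URL.

```python
# Illustrative representation of one ultra-fine typing instance (cf. Table 1).
# Field names are our own; the released data may be structured differently.
example = {
    "sentence": "During the Inca Empire, the Inti Raymi was the most important "
                "of four ceremonies celebrated in Cusco.",
    "mention": "the Inti Raymi",
    "types": ["event", "festival", "ritual", "custom",
              "ceremony", "party", "celebration"],
}
```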
Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a), question answering (Yavuz et al., 2016), query analysis (Balog and Neumayer, 2012), and coreference resolution (Durrett and Klein, 2014). These systems used a relatively coarse type ontology. However, manually designing the ontology is a challenging task, and it is difficult to cover all possible concepts even within a limited domain. This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types. For instance, annotators of the OntoNotes dataset (Gillick et al., 2014) marked about half of the mentions as “other,” because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details). Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.

Figure 1 (bubble charts; panels: (a) Our Dataset, (b) OntoNotes, (c) FIGER): A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency. Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.

To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples. Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained. Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate. Our evaluation data has over 2,500 unique types, posing a challenging learning problem.
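The coverage numbers cited for Figure 1 can be computed with a few lines; a sketch, weighting each type by 1/|T| so that every example contributes equally (the counting later used for Figure 2 in Section 2.2):

```python
# Sketch: how many of the most frequent labels are needed to cover
# a given fraction of the label mass (cf. Figures 1 and 2).
from collections import Counter

def labels_needed(type_sets, coverage=0.9):
    """type_sets: list of gold type sets, one per example (each non-empty).
    Each type is weighted 1/|T| so every example contributes equally."""
    mass = Counter()
    for types in type_sets:
        for t in types:
            mass[t] += 1.0 / len(types)
    total = sum(mass.values())
    covered, needed = 0.0, 0
    for _, weight in mass.most_common():
        covered += weight
        needed += 1
        if covered / total >= coverage:
            break
    return needed
```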
While our types are harder to predict, they also allow for a new form of contextual distant supervision. We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention’s head word. For example, “the incumbent chairman of the African Union” is a type of “chairman.” This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious. For example, “Clint Eastwood” can be described with dozens of types, but context-sensitive typing would prefer “director” instead of “mayor” for the sentence “Clint Eastwood won ‘Best Director’ for Million Dollar Baby.” We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking. Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models. Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark. 2 Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in “Bill Gates has donated billions to eradicate malaria,” Bill Gates should be typed as “philanthropist” and not “inventor.” This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g. “Which philanthropist is trying to prevent malaria?”). We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2). 89 2.1 Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011), OntoNotes (Hovy et al., 2006), and web articles (Singh et al., 2012). We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017). We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity’s type. To encourage annotators to generate fine-grained types, we require at least one general type (e.g. person, organization, location) and two specific types (e.g. doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases. We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types. Each pair of annotators agreed on 85% of the binary validation decisions (i.e. whether a type is suitable or not) and 0.47 in Fleiss’s κ. To further improve consistency, the final type set contained only types selected by at least 3/5 annotators. Further crowdsourcing details are available in the supplementary material. Our collection process focuses on precision. Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5). 2.2 Data Analysis We collected about 6,000 examples. 
For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work (Ling and Weld, 2012; Gillick et al., 2014) (e.g. film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g. detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types. Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples. Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary. For example, Figure 2: The label distribution across different evaluation datasets. In existing datasets, the top 4 or 7 labels cover over 80% of the labels. In ours, the top 50 labels cover less than 50% of the data. the model correctly predicts “television network” and “archipelago” for some mentions, even though that type never appears in the 6,000 crowdsourced examples. Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types. To quantify our observation, we calculate the distribution of types in FIGER (Ling and Weld, 2012), OntoNotes (Gillick et al., 2014), and our data. For examples with multiple types (|T| > 1), we counted each type 1/|T| times. Figure 2 shows the percentage of labels covered by the top N labels in each dataset. In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels. To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types. Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset. It is also striking that more than half of the examples in OntoNotes are classified as “other,” perhaps because of the limitation of its predefined ontology. Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions. This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity 90 Source Example Sentence Labels Size Prec. Head Words Western powers that brokered the proposed deal in Vienna are likely to balk, said Valerie Lincy, a researcher with the Wisconsin Project. power 20M 80.4% Alexis Kaniaris, CEO of the organizing company Europartners, explained, speaking in a radio program in national radio station NET. radio, station, radio station Entity Linking + Definitions Toyota recalled more than 8 million vehicles globally over sticky pedals that can become entrapped in floor mats. manufacturer 2.7M 77.7% Entity Linking + KB Iced Earth’s musical style is influenced by many traditional heavy metal groups such as Black Sabbath. person, artist, actor, author, musician 2.5M 77.6% Table 2: Distant supervision examples and statistics. We extracted the headword and Wikipedia definition supervision from Gigaword and Wikilink corpora. KB-based supervision is mapped from prior work, which used Wikipedia and news corpora. mentions). 
Our new dataset provides a wellrounded benchmark with roughly 40% pronouns, 38% nominal expressions, and 22% named entity mentions. The case of pronouns is particularly interesting, since the mention itself provides little information. 3 Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs). This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014), and precision can suffer when the selected types do not fit the context (Ritter et al., 2011). We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1). To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2). Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions. While a KB may link “the 44th president of the United States” to many types such as author, lawyer, and professor, head words provide only the type “president”, which is relevant in the context. We experiment with the new distant supervision sources as well as the traditional KB supervision. Table 2 shows examples and statistics for each source of supervision. We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2). 3.1 Entity Linking For KB supervision, we leveraged training data from prior work (Ling and Weld, 2012; Gillick et al., 2014) by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).2 Section 6 defines this mapping in more detail. To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia. We follow Shnarch et al. () who observed that the first sentence of a Wikipedia article often states the entity’s type via an “is a” relation; for example, “Roger Federer is a Swiss professional tennis player.” Since we are using a large type vocabulary, we can now mine this typing information.3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as “competition,” “movement,” and “village.” We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012), following prior work (Ling and Weld, 2012; Yosef et al., 2012). Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy4 that yields a signal with similar overall accuracy to KB-linked data. 2Data from: https://github.com/ shimaokasonse/NFGEC 3We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges. 4Only link if the mention contains the Wikipedia entity’s name and the entity’s name contains the mention’s head. 91 3.2 Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself. 
For example, when describing Titan V as “the newlyreleased graphics card”, the head words and phrases of this mention (“graphics card” and “card”) provide a somewhat noisy, but very easy to gather, context-sensitive type signal. We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset. To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary. Finally, we lowercase all words and convert plural to singular. Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%). Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g. “parts of capital” labeled as “part”). While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context. 4 Model We design a model for predicting sets of types given a mention in context. The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017), while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision. The hyperparameter settings are listed in the supplementary material. Context Representation Given a sentence x1, . . . , xn, we represent each token xi using a pre-trained word embedding wi. We concatenate an additional location embedding li which indicates whether xi is before, inside, or after the mention. We then use [xi; li] as an input to a bidirectional LSTM, producing a contextualized representation hi for each token; this is different from the architecture of Shimaoka et al. 2017, who used two separate bidirectional LSTMs on each side of the mention. Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: ai = SoftMaxi(va · relu(Wahi)) Where Wa and va are the parameters of the attention mechanism’s MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors. Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017). The final representation is the concatenation of the context and mention representations: r = [c; m]. Label Prediction We learn a type label embedding matrix Wt ∈Rn×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, Wgeneral, Wfine, Wultra, each of which contains the representations of the general, fine, and ultra-fine types respectively. We predict each type’s probability via the sigmoid of its inner product with r: y = σ(Wtr). We predict every type t for which yt > 0.5, or arg max yt if there is no such type. Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations. In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g. 
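Read concretely, the attention above is a one-hidden-layer scorer over the BiLSTM states followed by a weighted sum; a minimal NumPy sketch (array shapes are assumptions, and the BiLSTM that produces h is omitted):

```python
# Sketch of the context attention a_i = softmax_i(v_a . relu(W_a h_i)),
# followed by the weighted sum that gives the context vector c.
# h: (n_tokens, d_h) contextualized states; W_a: (d_a, d_h); v_a: (d_a,).
import numpy as np

def context_vector(h, W_a, v_a):
    scores = np.maximum(h @ W_a.T, 0.0) @ v_a   # relu(W_a h_i) . v_a, per token
    a = np.exp(scores - scores.max())
    a /= a.sum()                                # softmax over tokens
    return a @ h                                # weighted sum of token states
```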
when the head word is “inventor”, the model should not be discouraged to predict “person”. Prior work used a customized hinge loss (Abhishek et al., 2017) or max margin loss (Ren et al., 2016a) to improve robustness to noisy or incomplete supervision. We propose a multitask objective that reflects the characteristic of our training dataset. Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label. Specifically, the training objective is to minimize J where t is the target vector at each granularity: Jall = Jgeneral · 1general(t) + Jfine · 1fine(t) + Jultra · 1ultra(t) Where 1category(t) is an indicator function that checks if t contains a type in the category, and 92 Model Dev Test MRR P R F1 MRR P R F1 AttentiveNER 0.221 53.7 15.0 23.5 0.223 54.2 15.2 23.7 Our Model 0.229 48.1 23.2 31.3 0.234 47.1 24.2 32.0 Table 3: Performance of our model and AttentiveNER (Shimaoka et al., 2017) on the new entity typing benchmark, using same training data. We show results for both development and test sets. Train Data Total General (1918) Fine (1289) Ultra-Fine (7594) MRR P R F1 P R F1 P R F1 P R F1 All 0.229 48.1 23.2 31.3 60.3 61.6 61.0 40.4 38.4 39.4 42.8 8.8 14.6 – Crowd 0.173 40.1 14.8 21.6 53.7 45.6 49.3 20.8 18.5 19.6 54.4 4.6 8.4 – Head 0.220 50.3 19.6 28.2 58.8 62.8 60.7 44.4 29.8 35.6 46.2 4.7 8.5 – EL 0.225 48.4 22.3 30.6 62.2 60.1 61.2 40.3 26.1 31.7 41.4 9.9 16.0 Table 4: Results on the development set for different type granularity and for different supervision data with our model. In each row, we remove a single source of supervision. Entity linking (EL) includes supervision from both KB and Wikipedia definitions. The numbers in the first row are example counts for each type granularity. Jcategory is the category-specific logistic regression objective: J = − X i ti · log(yi) + (1 −ti) · log(1 −yi) 5 Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples. We use this relatively small manuallyannotated training set (Crowd in Table 4) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words. To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration. We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR). Results Table 3 shows the performance of our model and our reimplementation of AttentiveNER. Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision. The MRR score shows that our 5We use the AttentiveNER model with no engineered features or hierarchical label encoding (as a hierarchy is not clear in our label setting) and let it predict from the same label space, training with the same supervision data. model is slightly better than the baseline at ranking correct types above incorrect ones. Table 4 shows the performance breakdown for different type granularity and different supervision. 
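Before looking at those breakdowns in detail, the bin-wise multitask objective defined above can be summarized in a few lines; a sketch in NumPy (index layout and shapes are assumptions):

```python
# Sketch of the multitask objective: per-type binary cross-entropy, where a bin
# (general / fine / ultra-fine) contributes only if the gold set contains at
# least one type from that bin.
import numpy as np

def multitask_loss(y_pred, y_gold, bins):
    """y_pred: sigmoid probabilities over all types, shape (n_types,);
    y_gold: 0/1 gold vector of the same shape;
    bins: list of index arrays, one per granularity."""
    eps = 1e-12
    total = 0.0
    for idx in bins:
        t, y = y_gold[idx], y_pred[idx]
        if t.sum() == 0:          # no gold label at this granularity: skip the bin
            continue
        total += -np.sum(t * np.log(y + eps) + (1 - t) * np.log(1 - y + eps))
    return total
```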
Overall, as seen in previous work on finegrained NER literature (Gillick et al., 2014; Ren et al., 2016a), finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types. All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact. Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction. The low general type performance is partially because of nominal/pronoun mentions (e.g. “it”), and because of the large type inventory (sometimes “location” and “place” are annotated interchangeably). Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5. Overall, the model was able to generate accurate general types and a diverse set of type labels. Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): “man” is reasonable but counted as incorrect). This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category. Real precision errors include predicting co-hyponyms (example (b): “accident” instead of “attack”), and types that 93 Example Bruguera said {he} had problems with his left leg and had grown tired early during the match . (a) Annotation person, athlete, player, adult, male, contestant Prediction person, athlete, player, adult, male, contestant, defendant, man Example {The explosions} occurred on the night of October 7 , against the Hilton Taba and campsites used by Israelis in Ras al-Shitan. (b) Annotation event calamity, attack, disaster Prediction event, accident Example Similarly , Enterprise was considered for refit to replace Challenger after {the latter} was destroyed , but Endeavour was built from structural spares instead . (c) Annotation object, spacecraft, rocket, thing, vehicle, shuttle Prediction event Context “ There is a wealth of good news in this report , and I ’m particularly encouraged by the progress {we} are making against AIDS , ” HHS Secretary Donna Shalala said in a statement. (d) Annotation government, group, organization,hospital,administration,socialist Prediction government, group, person Table 5: Example and predictions from our best model on the development set. Entity mentions are marked with curly brackets, the correct predictions are boldfaced, and the missing labels are italicized and written in red. may be true, but are not supported by the context. We found that the model often abstained from predicting any fine-grained types. Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category). Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels. Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work. Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g. “location” and “person”), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually. We provide sample outputs on the project website. 
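The macro-averaged precision, recall, and F1 used in Tables 3 and 4 reduce to a short computation over predicted and gold type sets; a sketch (MRR omitted), assuming the per-example averaging that is standard in the fine-grained typing literature:

```python
# Sketch of macro-averaged precision / recall / F1 over type sets,
# assuming per-example averaging; inputs are lists of Python sets.
def macro_prf(pred_sets, gold_sets):
    p_sum = r_sum = 0.0
    for pred, gold in zip(pred_sets, gold_sets):
        overlap = len(pred & gold)
        p_sum += overlap / len(pred) if pred else 0.0
        r_sum += overlap / len(gold) if gold else 0.0
    p = p_sum / len(pred_sets)
    r = r_sum / len(gold_sets)
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1
```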
6 Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task. We chose the widely-used OntoNotes (Gillick et al., 2014) dataset which includes nominal and named entity mentions.6 6While we were inspired by FIGER (Ling and Weld, 2012), the dataset presents technical difficulties. The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation. We therefore focus our evaluation on OntoNotes. Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB. We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3). To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology. 77% of OntoNote’s types directly correspond to suitable noun labels (e.g. “doctor” to “/person/doctor”), whereas the other cases were mapped with minimal manual effort (e.g. “musician” to “person/artist/music”, “politician” to “/person/political figure”). We then expand these labels according to the ontology to include their hypernyms (“/person/political figure” will also generate “/person”). Lastly, we create negative examples by assigning the “/other” label to examples that are not mapped to the ontology. The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words. Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017). We also compare models trained with different sources of supervision. For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as94 Acc. Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET (Ren et al., 2016a) 55.1 71.1 64.7 LNR (Ren et al., 2016b) 57.2 71.5 66.1 Ours (ONTO+WIKI+HEAD) 59.5 76.8 71.8 Table 6: Results on the OntoNotes fine-grained entity typing test set. The first two models (AttentiveNER++ and AFET) use only KB-based supervision. LNR uses a filtered version of the KBbased training set. Our model uses all our distant supervision sources. Model Training Data Performance ONTO WIKI HEAD Acc. MaF1 MiF1 Attn.  46.5 63.3 58.3 NER    53.7 72.8 68.0  41.7 64.2 59.5   48.5 67.6 63.6 Ours   57.9 73.0 66.9   60.1 75.0 68.7    61.6 77.3 71.8 Table 7: Ablation study on the OntoNotes finegrained entity typing development. The second row isolates dataset improvements, while the third row isolates the model. sumption. Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match). Results Table 6 shows the overall performance on the test set. Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.7 In Table 7, we show an ablation study. Our new supervision sources improve the performance of both the AttentiveNER model and our own. We observe that every supervision source improves performance in its own right. Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics. 
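The label expansion used to build the augmented OntoNotes training data is mechanical once the noun-to-type mapping exists; a sketch, with two illustrative mapping entries standing in for the full manual mapping described above:

```python
# Sketch of the ontology expansion: map a noun label to an OntoNotes-style path,
# then add every ancestor on that path ("/person/artist/music" also yields
# "/person/artist" and "/person"). The mapping entries below are illustrative.
NOUN_TO_PATH = {
    "musician": "/person/artist/music",
    "politician": "/person/political_figure",
}

def expand(noun_label):
    path = NOUN_TO_PATH.get(noun_label)
    if path is None:
        return {"/other"}   # unmapped labels become negative ("/other") examples
    parts = path.strip("/").split("/")
    return {"/" + "/".join(parts[:i]) for i in range(1, len(parts) + 1)}

# expand("musician") -> {"/person", "/person/artist", "/person/artist/music"}
```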
Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes’ development set were annotated only with the miscellaneous type (“/other”). For both models in our evaluation, detecting the miscellaneous category is substantially easier than 7We did not compare to a system from (Yogatama et al., 2015), which reports slightly higher test number (72.98 micro F1) as they used a different, unreleased test set. producing real types (94% F1 vs. 58% F1 with our best model). We provide further details of this analysis in the supplementary material. 7 Related Work Fine-grained NER has received growing attention, and is used in many applications (Gupta et al., 2017; Ren et al., 2017; Yaghoobzadeh et al., 2017b; Raiman and Raiman, 2018). Researchers studied typing in varied contexts, including mentions in specific sentences (as we consider) (Ling and Weld, 2012; Gillick et al., 2014; Yogatama et al., 2015; Dong et al., 2015; Schutze et al., 2017), corpus-level prediction (Yaghoobzadeh and Sch¨utze, 2016), and lexicon level (given only a noun phrase with no context) (Yao et al., 2013). Recent work introduced fine-grained type ontologies (Rabinovich and Klein, 2017; Murty et al., 2017; Corro et al., 2015), defined using Wikipedia categories (100), Freebase types (1K) and WordNet senses (16K). However, they focus on named entities, and data has been challenging to gather, often approximating gold annotations with distant supervision. In contrast, (1) our ontology contains any frequent noun phrases that depicts a type, (2) our task goes beyond named entities, covering every noun phrase (even pronouns), and (3) we provide crowdsourced annotations which provide context-sensitive, fine grained type labels. Contextualized fine-grained entity typing is related to selectional preference (Resnik, 1996; Pantel et al., 2007; Zapirain et al., 2013; de Cruys, 2014), where the goal is to induce semantic generalizations on the type of arguments a predicate prefers. Rather than focusing on predicates, we condition on the entire sentence to deduce the arguments’ types, which allows us to capture more nuanced types. For example, not every type that fits “He played the violin in his room” is also suitable for “He played the violin in the Carnegie Hall”. Entity typing here can be connected to argument finding in semantic role labeling. To deal with noisy distant supervision for KB population and entity typing, researchers used multi-instance multi-label learning (Surdeanu et al., 2012; Yaghoobzadeh et al., 2017b) or custom losses (Abhishek et al., 2017; Ren et al., 2016a). Our multitask objective handles noisy supervision by pooling different distant supervision sources across different levels of granularity. 95 8 Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision. These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark. These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work. Acknowledgement The research was supported in part the ARO (W911NF-16-1-0121) the NSF (IIS-1252835, IIS1562364), and an Allen Distinguished Investigator Award. We would like to thank the reviewers for constructive feedback. 
Also thanks to Yotam Eshel and Noam Cohen for providing the Wikilink dataset. Special thanks to the members of UW NLP for helpful discussions and feedback. References Abhishek, Ashish Anand, and Amit Awekar. 2017. Fine-grained entity type classification by jointly learning representations and label embeddings. In Proceedings of European Chapter of Association for Computational Linguistics. Krisztian Balog and Robert Neumayer. 2012. Hierarchical target type identification for entity-oriented queries. In Proceedings of the Conference on Information and Knowledge Management. Luciano Del Corro, Abdalghani Abujabal, Rainer Gemulla, and Gerhard Weikum. 2015. Finet: Context-aware fine-grained named entity typing. In Proceedings of the conference on Empirical Methods in Natural Language Processing. Tim Van de Cruys. 2014. A neural network approach to selectional preference acquisition. In Proceedings of Empirical Methods in Natural Language Processing. Li Dong, Furu Wei, Hong Sun, Ming Zhou, and Ke Xu. 2015. A hybrid neural model for type classification of entity mentions. In Proceedings of International Joint Conference on Artificial Intelligence. Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. In Transactions of the Association for Computational Linguistics. Daniel Gillick, Nevena Lazic, Kuzman Ganchev, Jesse Kirchner, and David Huynh. 2014. Contextdependent fine-grained entity type tagging. CoRR, abs/1412.1820. Nitish Gupta, Sameer Singh, and Dan Roth. 2017. Entity linking via joint encoding of types, descriptions, and context. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2671–2680. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: the 90% solution. In Proceedings of the human language technology conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pages 57–60. Association for Computational Linguistics. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Xiao Ling and Daniel S Weld. 2012. Fine-grained entity recognition. In Proceedings of Association for the Advancement of Artificial Intelligence. Citeseer. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39– 41. Shikhar Murty, Patrick Verga, Luke Vilnis, and Andrew McCallum. 2017. Finer grained entity typing with typenet. In AKBC Workshop. Patrick Pantel, Rahul Bhagat, Bonaventura Coppola, Timothy Chklovski, and Eduard H. Hovy. 2007. Isp: Learning inferential selectional preferences. In Proceedings of North American Chapter of the Association for Computational Linguistics. Robert Parker, David Graff, David Kong, Ke Chen, and Kazuaki Maeda. 2011. English gigaword fifth edition (ldc2011t07). In Linguistic Data Consortium. Maxim Rabinovich and Dan Klein. 2017. Fine-grained entity typing with high-multiplicity assignments. In Proceedings of Association for Computational Linguistics (ACL). Jonathan Raiman and Olivier Raiman. 2018. 
Deeptype: Multilingual entity linking by neural type system evolution. In Association for the Advancement of Artificial Intelligence. Xiang Ren, Wenqi He, Meng Qu, Lifu Huang, Heng Ji, and Jiawei Han. 2016a. Afet: Automatic finegrained entity typing by hierarchical partial-label 96 embedding. In Proceedings Empirical Methods in Natural Language Processing. Xiang Ren, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, and Jiawei Han. 2016b. Label noise reduction in entity typing by heterogeneous partial-label embedding. In Proceedings of Knowledge Discovery and Data Mining. Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, Tarek F. Abdelzaher, and Jiawei Han. 2017. Cotype: Joint extraction of typed entities and relations with knowledge bases. In Proceedings of World Wide Web Conference. Philip Resnik. 1996. Selectional constraints: an information-theoretic model and its computational realization. Cognition, 61 1-2:127–59. Alan Ritter, Sam Clark, Oren Etzioni, et al. 2011. Named entity recognition in tweets: an experimental study. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1524–1534. Association for Computational Linguistics. Hinrich Schutze, Ulli Waltinger, and Sanjeev Karn. 2017. End-to-end trainable attentive decoder for hierarchical entity classification. In Proceedings of European Chapter of Association for Computational Linguistics. Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2017. An attentive neural architecture for fine-grained entity type classification. In Proceedings of the European Chapter of Association for Computational Linguistics (ACL). Eyal Shnarch, Libby Barak, and Ido Dagan. Extracting lexical reference rules from wikipedia. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Sameer Singh, Amarnag Subramanya, Fernando Pereira, and Andrew McCallum. 2012. Wikilinks: A large-scale cross-document coreference corpus labeled via links to Wikipedia. Technical Report UM-CS-2012-015, University of Massachusetts, Amherst. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D. Manning. 2012. Multiinstance multi-label learning for relation extraction. In EMNLP-CoNLL. Robert West, Evgeniy Gabrilovich, Kevin Murphy, Shaohua Sun, Rahul Gupta, and Dekang Lin. 2014. Knowledge base completion via search-based question answering. In Proceedings of World Wide Web Conference. Yadollah Yaghoobzadeh, Heike Adel, and Hinrich Sch¨utze. 2017a. Noise mitigation for neural entity typing and relation extraction. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics, abs/1612.07495. Yadollah Yaghoobzadeh, Heike Adel, and Hinrich Sch¨utze. 2017b. Noise mitigation for neural entity typing and relation extraction. In Proceedings of European Chapter of Association for Computational Linguistics. Yadollah Yaghoobzadeh and Hinrich Sch¨utze. 2016. Corpus-level fine-grained entity typing using contextual information. Proceedings of the Conference on Empirical Methods in Natural Language Processing. Limin Yao, Sebastian Riedel, and Andrew McCallum. 2013. Universal schema for entity type prediction. In Automatic KnowledgeBase Construction Workshop at the Conference on Information and Knowledge Management. Semih Yavuz, Izzeddin Gur, Yu Su, Mudhakar Srivatsa, and Xifeng Yan. 2016. Improving semantic parsing via answer type inference. 
In Proceedings of Empirical Methods in Natural Language Processing. Dani Yogatama, Daniel Gillick, and Nevena Lazic. 2015. Embedding methods for fine grained entity type classification. In Proceedings of Association for Computational Linguistics (ACL). M Amir Yosef, Sandro Bauer, Johannes Hoffart, Marc Spaniol, and Gerhard Weikum. 2012. Hyena: Hierarchical type classification for entity names. In Proceedings of the International Conference on Computational Linguistics. Be˜nat Zapirain, Eneko Agirre, Llu´ıs M`arquez i Villodre, and Mihai Surdeanu. 2013. Selectional preferences for semantic role classification. Computational Linguistics, 39:631–663.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 979–988 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 979 Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach Jingjing Xu1∗, Xu Sun1∗, Qi Zeng1, Xuancheng Ren1, Xiaodong Zhang1, Houfeng Wang1, Wenjie Li2 1MOE Key Lab of Computational Linguistics, School of EECS, Peking University 2Department of Computing, Hong Kong Polytechnic University {jingjingxu,xusun,pkuzengqi,renxc,zxdcs,wanghf}@pku.edu.cn [email protected] Abstract The goal of sentiment-to-sentiment “translation” is to change the underlying sentiment of a sentence while keeping its content. The main challenge is the lack of parallel data. To solve this problem, we propose a cycled reinforcement learning method that enables training on unpaired data by collaboration between a neutralization module and an emotionalization module. We evaluate our approach on two review datasets, Yelp and Amazon. Experimental results show that our approach significantly outperforms the state-of-the-art systems. Especially, the proposed method substantially improves the content preservation performance. The BLEU score is improved from 1.64 to 22.46 and from 0.56 to 14.06 on the two datasets, respectively.1 1 Introduction Sentiment-to-sentiment “translation” requires the system to change the underlying sentiment of a sentence while preserving its non-emotional semantic content as much as possible. It can be regarded as a special style transfer task that is important in Natural Language Processing (NLP) (Hu et al., 2017; Shen et al., 2017; Fu et al., 2018). It has broad applications, including review sentiment transformation, news rewriting, etc. Yet the lack of parallel training data poses a great obstacle to a satisfactory performance. Recently, several related studies for language style transfer (Hu et al., 2017; Shen et al., 2017) have been proposed. However, when applied ∗Equal Contribution. 1The released code can be found in https://github.com/lancopku/unpaired-sentiment-translation to the sentiment-to-sentiment “translation” task, most existing studies only change the underlying sentiment and fail in keeping the semantic content. For example, given “The food is delicious” as the source input, the model generates “What a bad movie” as the output. Although the sentiment is successfully transformed from positive to negative, the output text focuses on a different topic. The reason is that these methods attempt to implicitly separate the emotional information from the semantic information in the same dense hidden vector, where all information is mixed together in an uninterpretable way. Due to the lack of supervised parallel data, it is hard to only modify the underlying sentiment without any loss of the nonemotional semantic information. To tackle the problem of lacking parallel data, we propose a cycled reinforcement learning approach that contains two parts: a neutralization module and an emotionalization module. The neutralization module is responsible for extracting non-emotional semantic information by explicitly filtering out emotional words. The advantage is that only emotional words are removed, which does not affect the preservation of non-emotional words. The emotionalization module is responsible for adding sentiment to the neutralized semantic content for sentiment-to-sentiment translation. 
In cycled training, given an emotional sentence with sentiment s, we first neutralize it to the nonemotional semantic content, and then force the emotionalization module to reconstruct the original sentence by adding the sentiment s. Therefore, the emotionalization module is taught to add sentiment to the semantic context in a supervised way. By adding opposite sentiment, we can achieve the goal of sentiment-to-sentiment translation. Because of the discrete choice of neutral words, the gradient is no longer differentiable over the neutralization module. Thus, we use policy gradient, 980 one of the reinforcement learning methods, to reward the output of the neutralization module based on the feedback from the emotionalization module. We add different sentiment to the semantic content and use the quality of the generated text as reward. The quality is evaluated by two useful metrics: one for identifying whether the generated text matches the target sentiment; one for evaluating the content preservation performance. The reward guides the neutralization module to better identify non-emotional words. In return, the improved neutralization module further enhances the emotionalization module. Our contributions are concluded as follows: • For sentiment-to-sentiment translation, we propose a cycled reinforcement learning approach. It enables training with unpaired data, in which only reviews and sentiment labels are available. • Our approach tackles the bottleneck of keeping semantic information by explicitly separating sentiment information from semantic content. • Experimental results show that our approach significantly outperforms the state-of-the-art systems, especially in content preservation. 2 Related Work Style transfer in computer vision has been studied (Johnson et al., 2016; Gatys et al., 2016; Liao et al., 2017; Li et al., 2017; Zhu et al., 2017). The main idea is to learn the mapping between two image domains by capturing shared representations or correspondences of higher-level structures. There have been some studies on unpaired language style transfer recently. Hu et al. (2017) propose a new neural generative model that combines variational auto-encoders (VAEs) and holistic attribute discriminators for effective imposition of style semantic structures. Fu et al. (2018) propose to use an adversarial network to make sure that the input content does not have style information. Shen et al. (2017) focus on separating the underlying content from style information. They learn an encoder that maps the original sentence to style-independent content and a style-dependent decoder for rendering. However, their evaluations only consider the transferred style accuracy. We argue that content preservation is also an indispensable evaluation metric. However, when applied to the sentiment-to-sentiment translation task, the previously mentioned models share the same problem. They have the poor preservation of non-emotional semantic content. In this paper, we propose a cycled reinforcement learning method to improve sentiment-tosentiment translation in the absence of parallel data. The key idea is to build supervised training pairs by reconstructing the original sentence. A related study is “back reconstruction” in machine translation (He et al., 2016; Tu et al., 2017). They couple two inverse tasks: one is for translating a sentence in language A to a sentence in language B; the other is for translating a sentence in language B to a sentence in language A. 
Different from the previous work, we do not introduce the inverse task, but use collaboration between the neutralization module and the emotionalization module. Sentiment analysis is also related to our work (Socher et al., 2011; Pontiki et al., 2015; Rosenthal et al., 2017; Chen et al., 2017; Ma et al., 2017, 2018b). The task usually involves detecting whether a piece of text expresses positive, negative, or neutral sentiment. The sentiment can be general or about a specific topic. 3 Cycled Reinforcement Learning for Unpaired Sentiment-to-Sentiment Translation In this section, we introduce our proposed method. An overview is presented in Section 3.1. The details of the neutralization module and the emotionalization module are shown in Section 3.2 and Section 3.3. The cycled reinforcement learning mechanism is introduced in Section 3.4. 3.1 Overview The proposed approach contains two modules: a neutralization module and an emotionalization module, as shown in Figure 1. The neutralization module first extracts non-emotional semantic content, and then the emotionalization module attaches sentiment to the semantic content. Two modules are trained by the proposed cycled reinforcement learning method. The proposed method requires the two modules to have initial learning ability. Therefore, we propose a novel pre-training method, which uses a self-attention based sentiment classifier (SASC). A sketch of cycled reinforcement learning is shown in Algorithm 1. The 981 Neutralization Module Emotionalization Module The food is very * The food is very delicious Classifier Negative terrible and is tasteless The food very delicious is The food Positive Figure 1: An illustration of the two modules. Lower: The neutralization module removes emotional words and extracts non-emotional semantic information. Upper: The emotionalization module adds sentiment to the semantic content. The proposed self-attention based sentiment classifier is used to guide the pre-training. details are introduced as follows. 3.2 Neutralization Module The neutralization module Nθ is used for explicitly filtering out emotional information. In this paper, we consider this process as an extraction problem. The neutralization module first identifies non-emotional words and then feeds them into the emotionalization module. We use a single Longshort Term Memory Network (LSTM) to generate the probability of being neutral or being polar for every word in a sentence. Given an emotional input sequence x = (x1, x2, . . . , xT ) of T words from Γ, the vocabulary of words, this module is responsible for producing a neutralized sequence. Since cycled reinforcement learning requires the modules with initial learning ability, we propose a novel pre-training method to teach the neutralization module to identify non-emotional words. We construct a self-attention based sentiment classifier and use the learned attention weight as the supervisory signal. The motivation comes from the fact that, in a well-trained sentiment classification model, the attention weight reflects the sentiment contribution of each word to Algorithm 1 The cycled reinforcement learning method for training the neutralization module Nθ and the emotionalization module Eφ. 1: Initialize the neutralization module Nθ, the emotionalization module Eφ with random weights θ, φ 2: Pre-train Nθ using MLE based on Eq. 6 3: Pre-train Eφ using MLE based on Eq. 
7 4: for each iteration i = 1, 2, ..., M do 5: Sample a sequence x with sentiment s from X 6: Generate a neutralized sequence ˆx based on Nθ 7: Given ˆx and s, generate an output based on Eφ 8: Compute the gradient of Eφ based on Eq. 8 9: Compute the reward R1 based on Eq. 11 10: ¯s = the opposite sentiment 11: Given ˆx and ¯s, generate an output based on Eφ 12: Compute the reward R2 based on Eq. 11 13: Compute the combined reward Rc based on Eq. 10 14: Compute the gradient of Nθ based on Eq. 9 15: Update model parameters θ, φ 16: end for some extent. Emotional words tend to get higher attention weights while neutral words usually get lower weights. The details of sentiment classifier are described as follows. Given an input sequence x, a sentiment label y is produced as y = softmax(W · c) (1) where W is a parameter. The term c is computed as a weighted sum of hidden vectors: c = T X i=0 αihi (2) where αi is the weight of hi. The term hi is the output of LSTM at the i-th word. The term αi is computed as αi = exp(ei) PT i=0 exp(ei) (3) where ei = f(hi, hT ) is an alignment model. We consider the last hidden state hT as the context vector, which contains all information of an input sequence. The term ei evaluates the contribution of each word for sentiment classification. Our experimental results show that the proposed sentiment classifier achieves the accuracy of 89% and 90% on two datasets. With high classification accuracy, the attention weight produced by the classifier is considered to adequately capture the sentiment information of each word. To extract non-emotional words based on continuous attention weights, we map attention 982 weights to discrete values, 0 and 1. Since the discrete method is not the key part is this paper, we only use the following method for simplification. We first calculate the averaged attention value in a sentence as ¯α = 1 T T X i=0 αi (4) where ¯α is used as the threshold to distinguish non-emotional words from emotional words. The discrete attention weight is calculated as ˆαi = ( 1, if αi ≤¯α 0, if αi > ¯α (5) where ˆαi is treated as the identifier. For pre-training the neutralization module, we build the training pair of input text x and a discrete attention weight sequence ˆα. The cross entropy loss is computed as Lθ = − T X i=1 PNθ( ˆαi|xi) (6) 3.3 Emotionalization Module The emotionalization module Eφ is responsible for adding sentiment to the neutralized semantic content. In our work, we use a bi-decoder based encoder-decoder framework, which contains one encoder and two decoders. One decoder adds the positive sentiment and the other adds the negative sentiment. The input sentiment signal determines which decoder to use. Specifically, we use the seq2seq model (Sutskever et al., 2014) for implementation. Both the encoder and decoder are LSTM networks. The encoder learns to compress the semantic content into a dense vector. The decoder learns to add sentiment based on the dense vector. Given the neutralized semantic content and the target sentiment, this module is responsible for producing an emotional sequence. For pre-training the emotionalization module, we first generate a neutralized input sequence ˆx by removing emotional words identified by the proposed sentiment classifier. 
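To make this filtering step concrete, the following is a minimal Python sketch of the attention-threshold rule in Eqs. (4)-(5); the attention weights here are invented for illustration (in the model they come from the pre-trained self-attention classifier), and the returned binary mask plays the role of the discrete attention weights of Eq. (5) that supervise the neutralization module in Eq. (6).

```python
def neutralize(words, attn):
    """Sketch of the attention-threshold filtering (Eqs. 4-5): words whose
    attention weight exceeds the sentence average are treated as emotional
    and removed; the remaining words form the neutralized sequence."""
    assert len(words) == len(attn)
    avg = sum(attn) / len(attn)                       # Eq. 4: sentence-level threshold
    keep_mask = [1 if a <= avg else 0 for a in attn]  # Eq. 5: 1 = neutral, 0 = emotional
    neutralized = [w for w, k in zip(words, keep_mask) if k == 1]
    return neutralized, keep_mask                     # the mask doubles as the
                                                      # pre-training target of Eq. 6

# Example with made-up attention weights from the sentiment classifier:
words = ["The", "food", "is", "very", "delicious"]
attn  = [0.05, 0.10, 0.05, 0.10, 0.70]
print(neutralize(words, attn))
# (['The', 'food', 'is', 'very'], [1, 1, 1, 1, 0])
```

The per-sentence average threshold is deliberately parameter-free; as noted above, the discretization strategy itself is not the focus of the method.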
Given the training pair of a neutralized sequence ˆx and an original sentence x with sentiment s, the cross entropy loss is computed as Lφ = − T X i=1 PEφ(xi|ˆxi, s) (7) where a positive example goes through the positive decoder and a negative example goes through the negative decoder. We also explore a simpler method for pretraining the emotionalization module, which uses the product between a continuous vector 1 −α and a word embedding sequence as the neutralized content where α represents an attention weight sequence. Experimental results show that this method achieves much lower results than explicitly removing emotional words based on discrete attention weights. Thus, we do not choose this method in our work. 3.4 Cycled Reinforcement Learning Two modules are trained by the proposed cycled method. The neutralization module first neutralizes an emotional input to semantic content and then the emotionalization module is forced to reconstruct the original sentence based on the source sentiment and the semantic content. Therefore, the emotionalization module is taught to add sentiment to the semantic content in a supervised way. Because of the discrete choice of neutral words, the loss is no longer differentiable over the neutralization module. Therefore, we formulate it as a reinforcement learning problem and use policy gradient to train the neutralization module. The detailed training process is shown as follows. We refer the neutralization module Nθ as the first agent and the emotionalization module Eφ as the second one. Given a sentence x associated with sentiment s, the term ˆx represents the middle neutralized context extracted by ˆα, which is generated by PNθ(ˆα|x). In cycled training, the original sentence can be viewed as the supervision for training the second agent. Thus, the gradient for the second agent is ∇φJ(φ) = ∇φ log(PEφ(x|ˆx, s)) (8) We denote ¯x as the output generated by PEφ(¯x|ˆx, s). We also denote y as the output generated by PEφ(y|ˆx, ¯s) where ¯s represents the opposite sentiment. Given ¯x and y, we first calculate rewards for training the neutralized module, R1 and R2. The details of calculation process will be introduced in Section 3.4.1. Then, we optimize parameters through policy gradient by maximizing the expected reward to train the neutralization module. It guides the neutralization module to identify non-emotional words better. In return, the 983 improved neutralization module further enhances the emotionalization module. According to the policy gradient theorem (Williams, 1992), the gradient for the first agent is ∇θJ(θ) = E[Rc · ∇θ log(PNθ(ˆα|x))] (9) where Rc is calculated as Rc = R1 + R2 (10) Based on Eq. 8 and Eq. 9, we use the sampling approach to estimate the expected reward. This cycled process is repeated until converge. 3.4.1 Reward The reward consists of two parts, sentiment confidence and BLEU. Sentiment confidence evaluates whether the generated text matches the target sentiment. We use a pre-trained classifier to make the judgment. Specially, we use the proposed selfattention based sentiment classifier for implementation. The BLEU (Papineni et al., 2002) score is used to measure the content preservation performance. Considering that the reward should encourage the model to improve both metrics, we use the harmonic mean of sentiment confidence and BLEU as reward, which is formulated as R = (1 + β2) 2 · BLEU · Confid (β2 · BLEU) + Confid (11) where β is a harmonic weight. 
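Before turning to the experiments, the reward computation can be sketched in a few lines of Python as a reference point. The BLEU and confidence values below are placeholders (any BLEU implementation and the pre-trained self-attention classifier would supply them in practice), the 0-1 scale assumed for BLEU is our choice for illustration, and β = 0.5 follows the value reported in the training details (Section 4.2).

```python
def reward(bleu, confid, beta=0.5):
    """Eq. 11: harmonic-style combination of content preservation (BLEU of the
    generated sentence against the source input, under our reading) and the
    sentiment confidence assigned by the pre-trained classifier."""
    return (1 + beta ** 2) * bleu * confid / (beta ** 2 * bleu + confid)

def combined_reward(recon_bleu, recon_confid, transfer_bleu, transfer_confid):
    """Eq. 10: R1 is computed on the output generated with the source
    sentiment, R2 on the output generated with the opposite sentiment; their
    sum scales the policy-gradient update of the neutralization module (Eq. 9)."""
    return reward(recon_bleu, recon_confid) + reward(transfer_bleu, transfer_confid)

print(reward(bleu=0.4, confid=0.9))   # 0.45: a low BLEU pulls the reward down
                                      # even when the classifier is confident
```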
4 Experiment In this section, we evaluate our method on two review datasets. We first introduce the datasets, the training details, the baselines, and the evaluation metrics. Then, we compare our approach with the state-of-the-art systems. Finally, we show the experimental results and provide the detailed analysis of the key components. 4.1 Unpaired Datasets We conduct experiments on two review datasets that contain user ratings associated with each review. Following previous work (Shen et al., 2017), we consider reviews with rating above three as positive reviews and reviews below three as negative reviews. The positive and negative reviews are not paired. Since our approach focuses on sentence-level sentiment-to-sentiment translation where sentiment annotations are provided at the document level, we process the two datasets with the following steps. First, following previous work (Shen et al., 2017), we filter out the reviews that exceed 20 words. Second, we construct textsentiment pairs by extracting the first sentence in a review associated with its sentiment label, because the first sentence usually expresses the core idea. Finally, we train a sentiment classifier and filter out the text-sentiment pairs with the classifier confidence below 0.8. Specially, we use the proposed self-attention based sentiment classifier for implementation. The details of the processed datasets are introduced as follows. Yelp Review Dataset (Yelp): This dataset is provided by Yelp Dataset Challenge.2 The processed Yelp dataset contains 400K, 10K, and 3K pairs for training, validation, and testing, respectively. Amazon Food Review Dataset (Amazon): This dataset is provided by McAuley and Leskovec (2013). It consists of amounts of food reviews from Amazon.3 The processed Amazon dataset contains 230K, 10K, and 3K pairs for training, validation, and testing, respectively. 4.2 Training Details We tune hyper-parameters based on the performance on the validation sets. The self-attention based sentiment classifier is trained for 10 epochs on two datasets. We set β for calculating reward to 0.5, hidden size to 256, embedding size to 128, vocabulary size to 50K, learning rate to 0.6, and batch size to 64. We use the Adagrad (Duchi et al., 2011) optimizer. All of the gradients are clipped when the norm exceeds 2. Before cycled training, the neutralization module and the emotionalization module are pre-trained for 1 and 4 epochs on the yelp dataset, for 3 and 5 epochs on the Amazon dataset. 4.3 Baselines We compare our proposed method with the following state-of-the-art systems. Cross-Alignment Auto-Encoder (CAAE): This method is proposed by Shen et al. (2017). They propose a method that uses refined alignment of latent representations in hidden layers to 2https://www.yelp.com/dataset/ challenge 3http://amazon.com 984 perform style transfer. We treat this model as a baseline and adapt it by using the released code. Multi-Decoder with Adversarial Learning (MDAL): This method is proposed by Fu et al. (2018). They use a multi-decoder model with adversarial learning to separate style representations and content representations in hidden layers. We adapt this model by using the released code. 4.4 Evaluation Metrics We conduct two evaluations in this work, including an automatic evaluation and a human evaluation. The details of evaluation metrics are shown as follows. 4.4.1 Automatic Evaluation We quantitatively measure sentiment transformation by evaluating the accuracy of generating designated sentiment. 
For a fair comparison, we do not use the proposed sentiment classification model. Following previous work (Shen et al., 2017; Hu et al., 2017), we instead use a stateof-the-art sentiment classifier (Vieira and Moura, 2017), called TextCNN, to automatically evaluate the transferred sentiment accuracy. TextCNN achieves the accuracy of 89% and 88% on two datasets. Specifically, we generate sentences given sentiment s, and use the pre-trained sentiment classifier to assign sentiment labels to the generated sentences. The accuracy is calculated as the percentage of the predictions that match the sentiment s. To evaluate the content preservation performance, we use the BLEU score (Papineni et al., 2002) between the transferred sentence and the source sentence as an evaluation metric. BLEU is a widely used metric for text generation tasks, such as machine translation, summarization, etc. The metric compares the automatically produced text with the reference text by computing overlapping lexical n-gram units. To evaluate the overall performance, we use the geometric mean of ACC and BLEU as an evaluation metric. The G-score is one of the most commonly used “single number” measures in Information Retrieval, Natural Language Processing, and Machine Learning. 4.4.2 Human Evaluation While the quantitative evaluation provides indication of sentiment transfer quality, it can not evaluate the quality of transferred text accurately. Yelp ACC BLEU G-score CAAE (Shen et al., 2017) 93.22 1.17 10.44 MDAL (Fu et al., 2018) 85.65 1.64 11.85 Proposed Method 80.00 22.46 42.38 Amazon ACC BLEU G-score CAAE (Shen et al., 2017) 84.19 0.56 6.87 MDAL (Fu et al., 2018) 70.50 0.27 4.36 Proposed Method 70.37 14.06 31.45 Table 1: Automatic evaluations of the proposed method and baselines. ACC evaluates sentiment transformation. BLEU evaluates content preservation. G-score is the geometric mean of ACC and BLEU. Therefore, we also perform a human evaluation on the test set. We randomly choose 200 items for the human evaluation. Each item contains the transformed sentences generated by different systems given the same source sentence. The items are distributed to annotators who have no knowledge about which system the sentence is from. They are asked to score the transformed text in terms of sentiment and semantic similarity. Sentiment represents whether the sentiment of the source text is transferred correctly. Semantic similarity evaluates the context preservation performance. The score ranges from 1 to 10 (1 is very bad and 10 is very good). 4.5 Experimental Results Automatic evaluation results are shown in Table 1. ACC evaluates sentiment transformation. BLEU evaluates semantic content preservation. G-score represents the geometric mean of ACC and BLEU. CAAE and MDAL achieve much lower BLEU scores, 1.17 and 1.64 on the Yelp dataset, 0.56 and 0.27 on the Amazon dataset. The low BLEU scores indicate the worrying content preservation performance to some extent. Even with the desired sentiment, the irrelevant generated text leads to worse overall performance. In general, these two systems work more like sentiment-aware language models that generate text only based on the target sentiment and neglect the source input. The main reason is that these two systems attempt to separate emotional information from non-emotional content in a hidden layer, where all information is complicatedly mixed together. It is difficult to only modify emotional information without any loss of non-emotional semantic content. 
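As a small sanity check on the overall metric, the G-scores in Table 1 follow directly from the reported ACC and BLEU columns, since the G-score is defined above as their geometric mean:

```python
from math import sqrt

for system, acc, bleu in [("Proposed (Yelp)",   80.00, 22.46),
                          ("Proposed (Amazon)", 70.37, 14.06),
                          ("CAAE (Yelp)",       93.22,  1.17)]:
    print(system, round(sqrt(acc * bleu), 2))
# approx. 42.39, 31.45 and 10.44, matching Table 1 up to rounding of the inputs
```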
In comparison, our proposed method achieves the best overall performance on the two datasets, 985 Yelp Sentiment Semantic G-score CAAE (Shen et al., 2017) 7.67 3.87 5.45 MDAL (Fu et al., 2018) 7.12 3.68 5.12 Proposed Method 6.99 5.08 5.96 Amazon Sentiment Semantic G-score CAAE (Shen et al., 2017) 8.61 3.15 5.21 MDAL (Fu et al., 2018) 7.93 3.22 5.05 Proposed Method 7.92 4.67 6.08 Table 2: Human evaluations of the proposed method and baselines. Sentiment evaluates sentiment transformation. Semantic evaluates content preservation. demonstrating the ability of learning knowledge from unpaired data. This result is attributed to the improved BLEU score. The BLEU score is largely improved from 1.64 to 22.46 and from 0.56 to 14.06 on the two datasets. The score improvements mainly come from the fact that we separate emotional information from semantic content by explicitly filtering out emotional words. The extracted content is preserved and fed into the emotionalization module. Given the overall quality of transferred text as the reward, the neutralization module is taught to extract non-emotional semantic content better. Table 2 shows the human evaluation results. It can be clearly seen that the proposed method obviously improves semantic preservation. The semantic score is increased from 3.87 to 5.08 on the Yelp dataset, and from 3.22 to 4.67 on the Amazon dataset. In general, our proposed model achieves the best overall performance. Furthermore, it also needs to be noticed that with the large improvement in content preservation, the sentiment accuracy of the proposed method is lower than that of CAAE on the two datasets. It shows that simultaneously promoting sentiment transformation and content preservation remains to be studied further. By comparing two evaluation results, we find that there is an agreement between the human evaluation and the automatic evaluation. It indicates the usefulness of automatic evaluation metrics. However, we also notice that the human evaluation has a smaller performance gap between the baselines and the proposed method than the automatic evaluation. It shows the limitation of automatic metrics for giving accurate results. For evaluating sentiment transformation, even with a high accuracy, the sentiment classifier sometimes generates noisy results, especially for those neutral sentences (e.g., “I ate a cheese sandwich”). For evaluating content preservation, the BLEU score Input: I would strongly advise against using this company. CAAE: I love this place for a great experience here. MDAL: I have been a great place was great. Proposed Method: I would love using this company. Input: The service was nearly non-existent and extremely rude. CAAE: The best place in the best area in vegas. MDAL: The food is very friendly and very good. Proposed Method: The service was served and completely fresh. Input: Asked for the roast beef and mushroom sub, only received roast beef. CAAE: We had a great experience with. MDAL: This place for a great place for a great food and best. Proposed Method: Thanks for the beef and spring bbq. Input: Worst cleaning job ever! CAAE: Great food and great service! MDAL: Great food, food! Proposed Method: Excellent outstanding job ever! Input: Most boring show I’ve ever been. CAAE: Great place is the best place in town. MDAL: Great place I’ve ever ever had. Proposed Method: Most amazing show I’ve ever been. Input: Place is very clean and the food is delicious. CAAE: Don’t go to this place. MDAL: This place wasn’t worth the worst place is horrible. 
Proposed Method: Place is very small and the food is terrible. Input: Really satisfied with experience buying clothes. CAAE: Don’t go to this place. MDAL: Do not impressed with this place. Proposed Method: Really bad experience. Table 3: Examples generated by the proposed approach and baselines on the Yelp dataset. The two baselines change not only the polarity of examples, but also the semantic content. In comparison, our approach changes the sentiment of sentences with higher semantic similarity. is computed based on the percentage of overlapping n-grams between the generated text and the reference text. However, the overlapping n-grams contain not only content words but also function words, bringing the noisy results. In general, accurate automatic evaluation metrics are expected in future work. Table 3 presents the examples generated by different systems on the Yelp dataset. The two baselines change not only the polarity of examples, but also the semantic content. In comparison, our method precisely changes the sentiment of sentences (and paraphrases slightly to ensure fluency), while keeping the semantic content unchanged. 986 Yelp ACC BLEU G-score Emotionalization Module 41.84 25.66 32.77 + NM + Cycled RL 85.71 1.08 9.62 + NM + Pre-training 70.61 17.02 34.66 + NM + Cycled RL + Pre-training 80.00 22.46 42.38 Amazon ACC BLEU G-score Emotionalization Module 57.28 12.22 26.46 + NM + Cycled RL 64.16 8.03 22.69 + NM + Pre-training 69.61 11.16 27.87 + NM + Cycled RL + Pre-training 70.37 14.06 31.45 Table 4: Performance of key components in the proposed approach. “NM” denotes the neutralization module. “Cycled RL” represents cycled reinforcement learning. 4.6 Incremental Analysis In this section, we conduct a series of experiments to evaluate the contributions of our key components. The results are shown in Table 4. We treat the emotionalization module as a baseline where the input is the original emotional sentence. The emotionalization module achieves the highest BLEU score but with much lower sentiment transformation accuracy. The encoding of the original sentiment leads to the emotional hidden vector that influences the decoding process and results in worse sentiment transformation performance. It can be seen that the method with all components achieves the best performance. First, we find that the method that only uses cycled reinforcement learning performs badly because it is hard to guide two randomly initialized modules to teach each other. Second, the pre-training method brings a slight improvement in overall performance. The G-score is improved from 32.77 to 34.66 and from 26.46 to 27.87 on the two datasets. The bottleneck of this method is the noisy attention weight because of the limited sentiment classification accuracy. Third, the method that combines cycled reinforcement learning and pre-training achieves the better performance than using one of them. Pre-training gives the two modules initial learning ability. Cycled training teaches the two modules to improve each other based on the feedback signals. Specially, the G-score is improved from 34.66 to 42.38 and from 27.87 to 31.45 on the two datasets. Finally, by comparing the methods with and without the neutralization module, we find that the neutralization mechanism improves a lot in sentiment transformation with a slight reduction on content preservation. It proves the effectiveness of explicMichael is absolutely wonderful. I would strongly advise against using this company. Horrible experience! Worst cleaning job ever! 
Most boring show i ’ve ever been. Hainan chicken was really good. I really don’t understand all the negative reviews for this dentist. Smells so weird in there. The service was nearly non-existent and extremely rude. Table 5: Analysis of the neutralization module. Words in red are removed by the neutralization module. itly separating sentiment information from semantic content. Furthermore, to analyze the neutralization ability in the proposed method, we randomly sample several examples, as shown in Table 5. It can be clearly seen that emotional words are removed accurately almost without loss of non-emotional information. 4.7 Error Analysis Although the proposed method outperforms the state-of-the-art systems, we also observe several failure cases, such as sentiment-conflicted sentences (e.g., “Outstanding and bad service”), neutral sentences (e.g., “Our first time here”). Sentiment-conflicted sentences indicate that the original sentiment is not removed completely. This problem occurs when the input contains emotional words that are unseen in the training data, or the sentiment is implicitly expressed. Handling complex sentiment expressions is an important problem for future work. Neutral sentences demonstrate that the decoder sometimes fails in adding the target sentiment and only generates text based on the semantic content. A better sentimentaware decoder is expected to be explored in future work. 5 Conclusions and Future Work In this paper, we focus on unpaired sentimentto-sentiment translation and propose a cycled reinforcement learning approach that enables training in the absence of parallel training data. We conduct experiments on two review datasets. Experimental results show that our method substantially outperforms the state-of-the-art systems, especially in terms of semantic preservation. For future work, we would like to explore a fine-grained version of sentiment-to-sentiment translation that 987 not only reverses sentiment, but also changes the strength of sentiment. Acknowledgements This work was supported in part by National Natural Science Foundation of China (No. 61673028), National High Technology Research and Development Program of China (863 Program, No. 2015AA015404), and the National Thousand Young Talents Program. Xu Sun is the corresponding author of this paper. References Tao Chen, Ruifeng Xu, Yulan He, and Xuan Wang. 2017. Improving sentiment analysis via sentence type classification using bilstm-crf and CNN. Expert Syst. Appl., 72:221–230. Li Dong, Shaohan Huang, Furu Wei, Mirella Lapata, Ming Zhou, and Ke Xu. 2017. Learning to generate product reviews from attributes. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 623–632. John C. Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159. Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In AAAI 2018. Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. 2016. Image style transfer using convolutional neural networks. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 2414–2423. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. 
In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 820–828. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Controllable text generation. In ICML 2017. Justin Johnson, Alexandre Alahi, and Li Fei-Fei. 2016. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016, pages 694–711. Yanghao Li, Naiyan Wang, Jiaying Liu, and Xiaodi Hou. 2017. Demystifying neural style transfer. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 2230–2236. Jing Liao, Yuan Yao, Lu Yuan, Gang Hua, and Sing Bing Kang. 2017. Visual attribute transfer through deep image analogy. ACM Trans. Graph., 36(4):120:1–120:15. Junyang Lin, Shuming Ma, Qi Su, and Xu Sun. 2018. Decoding-history-based adaptive control of attention for neural machine translation. CoRR, abs/1802.01812. Dehong Ma, Sujian Li, Xiaodong Zhang, Houfeng Wang, and Xu Sun. 2017. Cascading multiway attentions for document-level sentiment classification. In Proceedings of the Eighth International Joint Conference on Natural Language Processing, IJCNLP 2017, Taipei, Taiwan, November 27 - December 1, 2017 - Volume 1: Long Papers, pages 634–643. Shuming Ma, Xu Sun, Wei Li, Sujian Li, Wenjie Li, and Xuancheng Ren. 2018a. Query and output: Generating words by querying distributed word representations for paraphrase generation. CoRR, abs/1803.01465. Shuming Ma, Xu Sun, Junyang Lin, and Xuancheng Ren. 2018b. A hierarchical end-to-end model for jointly improving text summarization and sentiment classification. CoRR, abs/1805.01089. Julian John McAuley and Jure Leskovec. 2013. From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews. In 22nd International World Wide Web Conference, WWW ’13, Rio de Janeiro, Brazil, May 13-17, 2013, pages 897– 908. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA., pages 311–318. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th International Workshop on Semantic Evaluation, SemEval@NAACLHLT 2015, Denver, Colorado, USA, June 4-5, 2015, pages 486–495. Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. Semeval-2017 task 4: Sentiment analysis in twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval@ACL 2017, Vancouver, Canada, August 3-4, 2017, pages 502–518. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In NIPS 2017. 988 Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pages 129–136. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. 
In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 813 2014, Montreal, Quebec, Canada, pages 3104– 3112. Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2017. Neural machine translation with reconstruction. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 3097–3103. Joao Paulo Albuquerque Vieira and Raimundo Santos Moura. 2017. An analysis of convolutional neural networks for sentence classification. In XLIII 2017, pages 1–5. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256. Jingjing Xu, Xu Sun, Xuancheng Ren, Junyang Lin, Bingzhen Wei, and Wei Li. 2018. DP-GAN: diversity-promoting generative adversarial network for generating informative and diversified text. CoRR, abs/1802.01345. Hongyu Zang and Xiaojun Wan. 2017. Towards automatic generation of product reviews from aspectsentiment scores. In Proceedings of the 10th International Conference on Natural Language Generation, pages 168–177. Zhiyuan Zhang, Wei Li, and Xu Sun. 2018. Automatic transferring between ancient chinese and contemporary chinese. CoRR, abs/1803.01557. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 2229, 2017, pages 2242–2251.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 989–999 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 989 Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference Boyuan Pan†, Yazheng Yang‡, Zhou Zhao‡, Yueting Zhuang‡, Deng Cai†♯∗, Xiaofei He⋆† †State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China ‡College of Computer Science, Zhejiang University, Hangzhou, China ♯Alibaba-Zhejiang University Joint Institute of Frontier Technologies ⋆Fabu Inc., Hangzhou, China {panby, yazheng yang, zhaozhou, yzhuang}@zju.edu.cn {dengcai, xiaofeihe}@cad.zju.edu.com Abstract Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is one of the most important problems in natural language processing. It requires to infer the logical relationship between two given sentences. While current approaches mostly focus on the interaction architectures of the sentences, in this paper, we propose to transfer knowledge from some important discourse markers to augment the quality of the NLI model. We observe that people usually use some discourse markers such as “so” or “but” to represent the logical relationship between two sentences. These words potentially have deep connections with the meanings of the sentences, thus can be utilized to help improve the representations of them. Moreover, we use reinforcement learning to optimize a new objective function with a reward defined by the property of the NLI datasets to make full use of the labels information. Experiments show that our method achieves the state-of-the-art performance on several large-scale datasets. 1 Introduction In this paper, we focus on the task of Natural Language Inference (NLI), which is known as a significant yet challenging task for natural language understanding. In this task, we are given two sentences which are respectively called premise and hypothesis. The goal is to determine whether the logical relationship between them is entailment, neutral, or contradiction. Recently, performance on NLI(Chen et al., 2017b; Gong et al., 2018; Chen et al., 2017c) ∗corresponding author Premise: A soccer game with multiple males playing. Hypothesis: Some men are playing a sport. Label: Entailment Premise: An older and younger man smiling. Hypothesis: Two men are smiling and laughing at the cats playing on the floor. Label: Neutral Premise: A black race car starts up in front of a crowd of people Hypothesis: A man is driving down a lonely road. Label: Contradiction Table 1: Three examples in SNLI dataset. has been significantly boosted since the release of some high quality large-scale benchmark datasets such as SNLI(Bowman et al., 2015) and MultiNLI(Williams et al., 2017). Table 1 shows some examples in SNLI. Most state-of-the-art works focus on the interaction architectures between the premise and the hypothesis, while they rarely concerned the discourse relations of the sentences, which is a core issue in natural language understanding. People usually use some certain set of words to express the discourse relation between two sentences1. These words, such as “but” or “and”, are denoted as discourse markers. These discourse markers have deep connections with the intrinsic relations of two sentences and intuitively correspond to the intent of NLI, such as “but” to “contradiction”, “so” to “entailment”, etc. Very few NLI works utilize this information revealed by discourse markers. 
Nie et al. (2017) proposed to use discourse markers to help rep1Here sentences mean either the whole sentences or the main clauses of a compound sentence. 990 resent the meanings of the sentences. However, they represent each sentence by a single vector and directly concatenate them to predict the answer, which is too simple and not ideal for the largescale datasets. In this paper, we propose a Discourse Marker Augmented Network for natural language inference, where we transfer the knowledge from the existing supervised task: Discourse Marker Prediction (DMP)(Nie et al., 2017), to an integrated NLI model. We first propose a sentence encoder model that learns the representations of the sentences from the DMP task and then inject the encoder to the NLI network. Moreover, because our NLI datasets are manually annotated, each example from the datasets might get several different labels from the annotators although they will finally come to a consensus and also provide a certain label. In consideration of that different confidence level of the final labels should be discriminated, we employ reinforcement learning with a reward defined by the uniformity extent of the original labels to train the model. The contributions of this paper can be summarized as follows. • Unlike previous studies, we solve the task of the natural language inference via transferring knowledge from another supervised task. We propose the Discourse Marker Augmented Network to combine the learned encoder of the sentences with the integrated NLI model. • According to the property of the datasets, we incorporate reinforcement learning to optimize a new objective function to make full use of the labels’ information. • We conduct extensive experiments on two large-scale datasets to show that our method achieves better performance than other stateof-the-art solutions to the problem. 2 Task Description 2.1 Natural Language Inference (NLI) In the natural language inference tasks, we are given a pair of sentences (P, H), which respectively means the premise and hypothesis. Our goal is to judge whether their logical relationship between their meanings by picking a label from a small set: entailment (The hypothesis is definitely a true description of the premise), neutral (The hypothesis might be a true description of the premise), and contradiction (The hypothesis is definitely a false description of the premise). 2.2 Discourse Marker Prediction (DMP) For DMP, we are given a pair of sentences (S1, S2), which is originally the first half and second half of a complete sentence. The model must predict which discourse marker was used by the author to link the two ideas from a set of candidates. 3 Sentence Encoder Model Following (Nie et al., 2017; Kiros et al., 2015), we use BookCorpus(Zhu et al., 2015) as our training data for discourse marker prediction, which is a dataset of text from unpublished novels, and it is large enough to avoid bias towards any particular domain or application. After preprocessing, we obtain a dataset with the form (S1, S2, m), which means the first half sentence, the last half sentence, and the discourse marker that connected them in the original text. Our goal is to predict the m given S1 and S2. 
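The shape of the resulting training data can be illustrated with a rough sketch. The split-on-marker heuristic below is only meant to show the (S1, S2, m) format; the marker list is the eight markers summarized later in Table 3, and the actual preprocessing follows Nie et al. (2017) and is more careful about clause boundaries.

```python
MARKERS = ["but", "because", "if", "when", "so", "although", "before", "still"]

def extract_pair(sentence):
    """Rough sketch only: split a tokenized sentence at the first discourse
    marker found to produce an (S1, S2, marker) triple for DMP training."""
    tokens = sentence.lower().split()
    for i, tok in enumerate(tokens):
        if tok in MARKERS and 0 < i < len(tokens) - 1:
            return " ".join(tokens[:i]), " ".join(tokens[i + 1:]), tok
    return None   # this sentence contributes no training pair

print(extract_pair("The movie was long , but I enjoyed every minute ."))
# ('the movie was long ,', 'i enjoyed every minute .', 'but')
```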
We first use Glove(Pennington et al., 2014) to transform {St}2 t=1 into vectors word by word and subsequently input them to a bi-directional LSTM: −→ hi t = −−−−→ LSTM(Glove(Si t)), i = 1, ..., |St| ←− hi t = ←−−−− LSTM(Glove(Si t)), i = |St|, ..., 1 (1) where Glove(w) is the embedding vector of the word w from the Glove lookup table, |St| is the length of the sentence St. We apply max pooling on the concatenation of the hidden states from both directions, which provides regularization and shorter back-propagation paths(Collobert and Weston, 2008), to extract the features of the whole sequences of vectors: −→ rt = Maxdim([ −→ h1 t ; −→ h2 t ; ...; −−→ h|St| t ]) ←− rt = Maxdim([ ←− h1 t ; ←− h2 t ; ...; ←−− h|St| t ]) (2) where Maxdim means that the max pooling is performed across each dimension of the concatenated vectors, [; ] denotes concatenation. Moreover, we combine the last hidden state from both directions and the results of max pooling to represent our sentences: rt = [−→ rt; ←− rt; −−→ h|St| t ; ←− h1 t ] (3) 991 Discourse Marker Prediction(DMP) Natural Language Inference(NLI) Combination Interaction Layer Glove Glove Glove Glove Char Char POS POS NER NER EM EM BiLSTM BiLSTM BiLSTM Prediction Prediction Sentence Representations Sentence Representations Sentence1 Sentence2 Premise Hypothesis Transferring Figure 1: Overview of our Discource Marker Augmented Network, comprising the part of Discourse Marker Prediction (upper) for pre-training and Natural Language Inferance (bottom) to which the learned knowledge will be transferred. where rt is the representation vector of the sentence St. To predict the discource marker between S1 and S2, we combine the representations of them with some linear operation: r = [r1; r2; r1 + r2; r1 ⊙r2] (4) where ⊙is elementwise product. Finally we project r to a vector of label size (the total number of discourse markers in the dataset) and use softmax function to normalize the probability distribution. 4 Discourse Marker Augmented Network As presented in Figure 1, we show how our Discourse Marker Augmented Network incorporates the learned encoder into the NLI model. 4.1 Encoding Layer We denote the premise as P and the hypothesis as H. To encode the words, we use the concatenation of following parts: Word Embedding: Similar to the previous section, we map each word to a vector space by using pre-trained word vectors GloVe. Character Embedding: We apply Convolutional Neural Networks (CNN) over the characters of each word. This approach is proved to be helpful in handling out-of-vocab (OOV) words(Yang et al., 2017). POS and NER tags: We use the part-of-speech (POS) tags and named-entity recognition (NER) tags to get syntactic information and entity label of the words. Following (Pan et al., 2017b), we apply the skip-gram model(Mikolov et al., 2013) to train two new lookup tables of POS tags and NER tags respectively. Each word can get its own POS embedding and NER embedding by these lookup tables. This approach represents much better geometrical features than common used one-hot vectors. Exact Match: Inspired by the machine comprehension tasks(Chen et al., 2017a), we want to know whether every word in P is in H (and H in P). We use three binary features to indicate whether the word can be exactly matched to any question word, which respectively means original form, lowercase and lemma form. 
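Because the sentence vectors rp and rh learned in Section 3 are injected into the layers that follow, a compact PyTorch sketch of that encoder (Eqs. 1-4) is included here for reference. The hidden size, the external embedding lookup, the use of a single shared BiLSTM for both halves, and the marker count (eight, per the BookCorpus statistics reported later) are illustrative assumptions rather than the exact configuration.

```python
import torch
import torch.nn as nn

class DMPEncoder(nn.Module):
    """Sketch of the sentence encoder and DMP head of Section 3 (Eqs. 1-4).
    GloVe embedding lookup is assumed to happen outside this module."""

    def __init__(self, emb_dim=300, hidden=300, num_markers=8):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(16 * hidden, num_markers)   # projection before softmax

    def encode(self, emb):                    # emb: (batch, T, emb_dim) word vectors
        out, (h_n, _) = self.lstm(emb)        # out: (batch, T, 2*hidden)
        pooled = out.max(dim=1).values        # max over time of [fwd; bwd] states (Eq. 2)
        fwd_last, bwd_first = h_n[0], h_n[1]  # final forward and first backward states (Eq. 3)
        return torch.cat([pooled, fwd_last, bwd_first], dim=-1)   # sentence vector r_t

    def forward(self, s1_emb, s2_emb):
        r1, r2 = self.encode(s1_emb), self.encode(s2_emb)
        r = torch.cat([r1, r2, r1 + r2, r1 * r2], dim=-1)         # Eq. 4
        return self.out(r)                    # logits over the discourse markers
```

After pre-training on the (S1, S2, m) pairs, the output of `encode` corresponds to the representations rp and rh that are reused below when the premise and hypothesis are passed through the same network.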
For encoding, we pass all sequences of vectors into a bi-directional LSTM and obtain: pi = BiLSTM(frep(Pi), pi−1), i = 1, ..., n uj = BiLSTM(frep(Hj), uj−1), j = 1, ..., m (5) where frep(x) = [Glove(x); Char(x); POS(x); NER(x); EM(x)] is the concatenation of the embedding vectors and the feature vectors of the word x, n = |P|, m = |H|. 4.2 Interaction Layer In this section, we feed the results of the encoding layer and the learned sentence encoder into the attention mechanism, which is responsible for linking and fusing information from the premise and the hypothesis words. 992 We first obtain a similarity matrix A ∈Rn×m between the premise and hypothesis by Aij = v⊤ 1 [pi; uj; pi ◦uj; rp; rh] (6) where v1 is the trainable parameter, rp and rh are sentences representations from the equation (3) learned in the Section 3, which denote the premise and hypothesis respectively. In addition to previous popular similarity matrix, we incorporate the relevance of each word of P(H) to the whole sentence of H(P). Now we use A to obtain the attentions and the attended vectors in both directions. To signify the attention of the i-th word of P to every word of H, we use the weighted sum of uj by Ai:: ˜ui = X j Aij · uj (7) where ˜ui is the attention vector of the i-th word of P for the entire H. In the same way, the ˜pj is obtained via: ˜pj = X i Aij · pi (8) To model the local inference between aligned word pairs, we integrate the attention vectors with the representation vectors via: ˆpi = f([pi; ˜ui; pi −˜ui; pi ⊙˜ui]) ˆuj = f([uj; ˜pj; uj −˜pj; uj ⊙˜pj]) (9) where f is a 1-layer feed-forward neural network with the ReLU activation function, ˆpi and ˆuj are local inference vectors. Inspired by (Seo et al., 2016) and (Chen et al., 2017b), we use a modeling layer to capture the interaction between the premise and the hypothesis. Specifically, we use bi-directional LSTMs as building blocks: pM i = BiLSTM(ˆpi, pM i−1) uM j = BiLSTM(ˆuj, uM j−1) (10) Here, pM i and uM j are the modeling vectors which contain the crucial information and relationship among the sentences. We compute the representation of the whole sentence by the weighted average of each word: pM = X i exp(v⊤ 2 pM i ) P i′ exp(v⊤ 2 pM i′ )pM i uM = X j exp(v⊤ 3 uM j ) P j′ exp(v⊤ 3 uM j′ )uM j (11) Label SNLI MultiNLI Number Correct Total Correct Total 1 510711 510711 392702 392702 2 0 0 0 0 3 8748 0 3045 0 4 16395 2199 4859 0 5 33179 56123 11743 19647 Table 2: Statistics of the labels of SNLI and MuliNLI. Total means the number of examples whose number of annotators is in the left column. Correct means the number of examples whose number of correct labels from the annotators is in the left column. where v2, v3 are trainable vectors. We don’t share these parameter vectors in this seemingly parallel strucuture because there is some subtle difference between the premise and hypothesis, which will be discussed later in Section 5. 4.3 Output Layer The NLI task requires the model to predict the logical relation from the given set: entailment, neutral or contradiction. We obtain the probability distribution by a linear function with softmax function: d = softmax(W[pM; uM; pM ⊙uM; rp ⊙rh]) (12) where W is a trainable parameter. We combine the representations of the sentences computed above with the representations learned from DMP to obtain the final prediction. 4.4 Training As shown in Table 2, many examples from our datasets are labeled by several people, and the choices of the annotators are not always consistent. 
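Before turning to the training objective, the interaction step above (Eqs. 6-9) can be sketched compactly in PyTorch for a single premise-hypothesis pair. The dimensions are illustrative, the fusion network f is shared across the two directions as the shared symbol in Eq. (9) suggests, and the unnormalized use of A in Eqs. (7)-(8) is kept as written (applying a softmax over A would be a common variant).

```python
import torch
import torch.nn as nn

class InteractionLayer(nn.Module):
    """Sketch of the attention/interaction step (Eqs. 6-9). d is the BiLSTM
    output size; dr is the size of the transferred sentence vectors rp, rh."""

    def __init__(self, d, dr):
        super().__init__()
        self.v1 = nn.Linear(3 * d + 2 * dr, 1, bias=False)       # scoring vector v_1
        self.f = nn.Sequential(nn.Linear(4 * d, d), nn.ReLU())   # fusion function f

    def forward(self, p, u, r_p, r_h):
        # p: (n, d) premise states, u: (m, d) hypothesis states,
        # r_p, r_h: (dr,) sentence vectors from the DMP encoder; no batch dim here.
        n, m = p.size(0), u.size(0)
        pe = p.unsqueeze(1).expand(n, m, -1)            # (n, m, d)
        ue = u.unsqueeze(0).expand(n, m, -1)            # (n, m, d)
        re = torch.cat([r_p, r_h]).expand(n, m, -1)     # broadcast sentence vectors
        A = self.v1(torch.cat([pe, ue, pe * ue, re], dim=-1)).squeeze(-1)  # Eq. 6
        u_tilde = A @ u                                 # Eq. 7: (n, d)
        p_tilde = A.t() @ p                             # Eq. 8: (m, d)
        p_hat = self.f(torch.cat([p, u_tilde, p - u_tilde, p * u_tilde], dim=-1))  # Eq. 9
        u_hat = self.f(torch.cat([u, p_tilde, u - p_tilde, u * p_tilde], dim=-1))
        return p_hat, u_hat
```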
For instance, when the label number is 3 in SNLI, “total=0” means that no examples have 3 annotators (maybe more or less); “correct=8748” means that there are 8748 examples whose number of correct labels is 3 (the number of annotators maybe 4 or 5, but some provided wrong labels). Although all the labels for each example will be unified to a final (correct) label, diversity of the labels for a single example indicates the low confidence of the result, which is not ideal to only use the final label to optimize the model. 993 We propose a new objective function that combines both the log probabilities of the ground-truth label and a reward defined by the property of the datasets for the reinforcement learning. The most widely used objective function for the natural language inference is to minimize the negative log cross-entropy loss: JCE(Θ) = −1 N N X k log(dk l ) (13) where Θ are all the parameters to optimize, N is the number of examples in the dataset, dl is the probability of the ground-truth label l. However, directly using the final label to train the model might be difficult in some situations, where the example is confusing and the labels from the annotators are different. For instance, consider an example from the SNLI dataset: • P: “A smiling costumed woman is holding an umbrella.” • H: “A happy woman in a fairy costume holds an umbrella.” The final label is neutral, but the original labels from the five annotators are neural, neural, entailment, contradiction, neural, in which case the relation between “smiling” and “happy” might be under different comprehension. The final label’s confidence of this example is obviously lower than an example that all of its labels are the same. To simulate the thought of human being more closely, in this paper, we tackle this problem by using the REINFORCE algorithm(Williams, 1992) to minimize the negative expected reward, which is defined as: JRL(Θ) = −El∼π(l|P,H)[R(l, {l∗})] (14) where π(l|P, H) is the previous action policy that predicts the label given P and H, {l∗} is the set of annotated labels, and R(l, {l∗}) = number of l in {l∗} |{l∗}| (15) is the reward function defined to measure the distance to all the ideas of the annotators. To avoid of overwriting its earlier results and further stabilize training, we use a linear function to integrate the above two objective functions: J(Θ) = λJCE(Θ) + (1 −λ)JRL(Θ) (16) where λ is a tunable hyperparameter. Discourse Marker Percentage(%) but 57.12 because 9.41 if 29.78 when 25.32 so 31.01 although 1.76 before 15.52 still 11.29 Table 3: Statistics of discouse markers in our dataset from BookCorpus. 5 Experiments 5.1 Datasets BookCorpus: We use the dataset from BookCorpus(Zhu et al., 2015) to pre-train our sentence encoder model. We preprocessed and collected discourse markers from BookCorpus as (Nie et al., 2017). We finally curated a dataset of 6527128 pairs of sentences for 8 discourse markers, whose statistics are shown in Table 3. SNLI: Stanford Natural Language Inference(Bowman et al., 2015) is a collection of more than 570k human annotated sentence pairs labeled for entailment, contradiction, and semantic independence. SNLI is two orders of magnitude larger than all other resources of its type. The premise data is extracted from the captions of the Flickr30k corpus(Young et al., 2014), the hypothesis data and the labels are manually annotated. The original SNLI corpus contains also the other category, which includes the sentence pairs lacking consensus among multiple human annotators. 
We remove this category and use the same split as in (Bowman et al., 2015) and other previous work. MultiNLI: Multi-Genre Natural Language Inference(Williams et al., 2017) is another large-scale corpus for the task of NLI. MultiNLI has 433k sentences pairs and is in the same format as SNLI, but it includes a more diverse range of text, as well as an auxiliary test set for cross-genre transfer evaluation. Half of these selected genres appear in training set while the rest are not, creating in-domain (matched) and cross-domain (mismatched) development/test sets. 994 Method SNLI MultiNLI Matched Mismatched 300D LSTM encoders(Bowman et al., 2016) 80.6 – – 300D Tree-based CNN encoders(Mou et al., 2016) 82.1 – – 4096D BiLSTM with max-pooling(Conneau et al., 2017) 84.5 – – 600D Gumbel TreeLSTM encoders(Choi et al., 2017) 86.0 – – 600D Residual stacked encoders(Nie and Bansal, 2017) 86.0 74.6 73.6 Gated-Att BiLSTM(Chen et al., 2017d) – 73.2 73.6 100D LSTMs with attention(Rockt¨aschel et al., 2016) 83.5 – – 300D re-read LSTM(Sha et al., 2016) 87.5 – – DIIN(Gong et al., 2018) 88.0 78.8 77.8 Biattentive Classification Network(McCann et al., 2017) 88.1 – – 300D CAFE(Tay et al., 2017) 88.5 78.7 77.9 KIM(Chen et al., 2017b) 88.6 – – 600D ESIM + 300D Syntactic TreeLSTM(Chen et al., 2017c) 88.6 – – DMAN 88.8 78.9 78.2 BiMPM(Ensemble)(Wang et al., 2017) 88.8 – – DIIN(Ensemble)(Gong et al., 2018) 88.9 80.0 78.7 KIM(Ensemble)(Chen et al., 2017b) 89.1 – – 300D CAFE(Ensemble)(Tay et al., 2017) 89.3 80.2 79.0 DMAN(Ensemble) 89.6 80.3 79.4 Table 4: Performance on the SNLI dataset and the MultiNLI dataset. In the top part, we show sentence encoding-based models; In the medium part, we present the performance of integrated neural network models; In the bottom part, we show the results of ensemble models. 5.2 Implementation Details We use the Stanford CoreNLP toolkit(Manning et al., 2014) to tokenize the words and generate POS and NER tags. The word embeddings are initialized by 300d Glove(Pennington et al., 2014), the dimensions of POS and NER embeddings are 30 and 10. The dataset we use to train the embeddings of POS tags and NER tags are the training set given by SNLI. We apply Tensorflow r1.3 as our neural network framework. We set the hidden size as 300 for all the LSTM layers and apply dropout(Srivastava et al., 2014) between layers with an initial ratio of 0.9, the decay rate as 0.97 for every 5000 step. We use the AdaDelta for optimization as described in (Zeiler, 2012) with ρ as 0.95 and ϵ as 1e-8. We set our batch size as 36 and the initial learning rate as 0.6. The parameter λ in the objective function is set to be 0.2. For DMP task, we use stochastic gradient descent with initial learning rate as 0.1, and we anneal by half each time the validation accuracy is lower than the previous epoch. The number of epochs is set to be 10, and the feedforward dropout rate is 0.2. The learned encoder in subsequent NLI task is trainable. 5.3 Results In table 4, we compare our model to other competitive published models on SNLI and MultiNLI. As we can see, our method Discourse Marker Augmented Network (DMAN) clearly outperforms all the baselines and achieves the state-of-the-art results on both datasets. The methods in the top part of the table are sentence encoding based models. Bowman et al. (2016) proposed a simple baseline that uses LSTM to encode the whole sentences and feed them into a MLP classifier to predict the final inference relationship, they achieve an accuracy of 80.6% on SNLI. 
Nie and Bansal (2017) test their model on both SNLI and MiltiNLI, and achieves competitive results. In the medium part, we show the results of other neural network models. Obviously, the performance of most of the integrated methods are better than the sentence encoding based models above. Both DIIN(Gong et al., 2018) and 995 Ablation Model Accuracy Only Sentence Encoder Model 83.37 No Sentence Encoder Model 87.24 No Char Embedding 87.95 No POS Embedding 88.76 No NER Embedding 88.71 No Exact Match 88.26 No REINFORCE 88.41 DMAN 88.83 Table 5: Ablations on the SNLI development dataset. CAFE(Tay et al., 2017) exceed other methods by more than 4% on MultiNLI dataset. However, our DMAN achieves 88.8% on SNLI, 78.9% on matched MultiNLI and 78.2% on mismatched MultiNLI, which are all best results among the baselines. We present the ensemble results on both datasets in the bottom part of the table 4. We build an ensemble model which consists of 10 single models with the same architecture but initialized with different parameters. The performance of our model achieves 89.6% on SNLI, 80.3% on matched MultiNLI and 79.4% on mismatched MultiNLI, which are all state-of-the-art results. 5.4 Ablation Analysis As shown in Table 5, we conduct an ablation experiment on SNLI development dataset to evaluate the individual contribution of each component of our model. Firstly we only use the results of the sentence encoder model to predict the answer, in other words, we represent each sentence by a single vector and use dot product with a linear function to do the classification. The result is obviously not satisfactory, which indicates that only using sentence embedding from discourse markers to predict the answer is not ideal in large-scale datasets. We then remove the sentence encoder model, which means we don’t use the knowledge transferred from the DMP task and thus the representations rp and rh are set to be zero vectors in the equation (6) and the equation (12). We observe that the performance drops significantly to 87.24%, which is nearly 1.5% to our DMAN model, which indicates that the discourse markers have deep connections with the logical relations between two sentences they links. When Figure 2: Performance when the sentence encoder is pretrained on different discourse markers sets. “NONE” means the model doesn’t use any discourse markers; “ALL” means the model use all the discourse markers. we remove the character-level embedding and the POS and NER features, the performance drops a lot. We conjecture that those feature tags help the model represent the words as a whole while the char-level embedding can better handle the outof-vocab (OOV) or rare words. The exact match feature also demonstrates its effectiveness in the ablation result. Finally, we ablate the reinforcement learning part, in other words, we only use the original loss function to optimize the model (set λ = 1). The result drops about 0.5%, which proves that it is helpful to utilize all the information from the annotators. 5.5 Semantic Analysis In Figure 2, we show the performance on the three relation labels when the model is pre-trained on different discourse markers sets. In other words, we removed discourse marker from the original set each time and use the rest 7 discourse markers to pre-train the sentence encoder in the DMP task and then train the DMAN. As we can see, there is a sharp decline of accuracy when removing “but”, “because” and “although”. 
We can intuitively speculate that “but” and “although” have direct connections with the contradiction label (which drops most significantly) while “because” has some links with the entailment label. We observe that some discourse markers such as “if” or “before” contribute much less than other words which have strong logical hints, although they 996 (a) Discourse markers augmentation (b) Without discourse markers augmentation Figure 3: Comparison of the visualized similarity relations. actually improve the performance of the model. Compared to the other two categories, the “contradiction” label examples seem to benefit the most from the pre-trained sentence encoder. 5.6 Visualization In Figure 3, we also provide a visualized analysis of the hidden representation from similarity matrix A (computed in the equation (6)) in the situations that whether we use the discourse markers or not. We pick a sentence pair whose premise is “3 young man in hoods standing in the middle of a quiet street facing the camera.” and hypothesis is “Three people sit by a busy street bareheaded.” We observe that the values are highly correlated among the synonyms like “people” with “man”, “three” with “3” in both situations. However, words that might have contradictory meanings like “hoods” with “bareheaded”, “quiet” with “busy” perform worse without the discourse markers augmentation, which conforms to the conclusion that the “contradiction” label examples benefit a lot which is observed in the Section 5.5. 6 Related Work 6.1 Discourse Marker Applications This work is inspired most directly by the DisSent model and Discourse Prediction Task of Nie et al. (2017), which introduce the use of the discourse markers information for the pretraining of sentence encoders. They follow (Kiros et al., 2015) to collect a large sentence pairs corpus from BookCorpus(Zhu et al., 2015) and propose a sentence representation based on that. They also apply their pre-trained sentence encoder to a series of natural language understanding tasks such as sentiment analysis, question-type, entailment, and relatedness. However, all those datasets are provided by Conneau et al. (2017) for evaluating sentence embeddings and are almost all small-scale and are not able to support more complex neural network. Moreover, they represent each sentence by a single vector and directly combine them to predict the answer, which is not able to interact among the words level. In closely related work, Jernite et al. (2017) propose a model that also leverage discourse relations. However, they manually group the discourse markers into several categories based on human knowledge and predict the category instead of the explicit discourse marker phrase. However, the size of their dataset is much smaller than that in (Nie et al., 2017), and sometimes there has been disagreement among annotators about what exactly is the correct categorization of discourse relations(Hobbs, 1990). Unlike previous works, we insert the sentence encoder into an integrated network to augment the semantic representation for NLI tasks rather than directly combining the sentence embeddings to predict the relations. 6.2 Natural Language Inference Earlier research on the natural language inference was based on small-scale datasets(Marelli et al., 2014), which relied on traditional methods such as shallow methods(Glickman et al., 2005), natural logic methods(MacCartney and Manning, 2007), etc. 
These datasets are either not large enough to support complex deep neural network models or too easy to challenge natural language. Large and complicated networks have been successful in many natural language processing tasks(Zhu et al., 2017; Chen et al., 2017e; Pan et al., 2017a). Recently, Bowman et al. (2015) released Stanford Natural language Inference (SNLI) dataset, which is a high-quality and large-scale benchmark, thus inspired many significant works(Bowman et al., 2016; Mou et al., 2016; Vendrov et al., 2016; Conneau et al., 2017; Wang 997 et al., 2017; Gong et al., 2018; McCann et al., 2017; Chen et al., 2017b; Choi et al., 2017; Tay et al., 2017). Most of them focus on the improvement of the interaction architectures and obtain competitive results, while transfer learning from external knowledge is popular as well. Vendrov et al. (2016) incorpated Skipthought(Kiros et al., 2015), which is an unsupervised sequence model that has been proven to generate useful sentence embedding. McCann et al. (2017) proposed to transfer the pre-trained encoder from the neural machine translation (NMT) to the NLI tasks. Our method combines a pre-trained sentence encoder from the DMP task with an integrated NLI model to compose a novel framework. Furthermore, unlike previous studies, we make full use of the labels provided by the annotators and employ policy gradient to optimize a new objective function in order to simulate the thought of human being. 7 Conclusion In this paper, we propose Discourse Marker Augmented Network for the task of the natural language inference. We transfer the knowledge learned from the discourse marker prediction task to the NLI task to augment the semantic representation of the model. Moreover, we take the various views of the annotators into consideration and employ reinforcement learning to help optimize the model. The experimental evaluation shows that our model achieves the state-of-the-art results on SNLI and MultiNLI datasets. Future works involve the choice of discourse markers and some other transfer learning sources. 8 Acknowledgements This work was supported in part by the National Nature Science Foundation of China (Grant Nos: 61751307), in part by the grant ZJU Research 083650 of the ZJUI Research Program from Zhejiang University and in part by the National Youth Top-notch Talent Support Program. The experiments are supported by Chengwei Yao in the Experiment Center of the College of Computer Science and Technology, Zhejiang university. References Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 632–642. Samuel R Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), volume 1, pages 1466–1477. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017a. Reading wikipedia to answer opendomain questions. In Meeting of the Association for Computational Linguistics (ACL), pages 1870– 1879. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, and Diana Inkpen. 2017b. Natural language inference with external knowledge. arXiv preprint arXiv:1711.04289. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017c. 
Enhanced lstm for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), volume 1, pages 1657– 1668. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017d. Recurrent neural network-based sentence encoder with gated attention for natural language inference. In Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP, pages 36–40. Zheqian Chen, Ben Gao, Huimin Zhang, Zhou Zhao, Haifeng Liu, and Deng Cai. 2017e. User personalized satisfaction prediction via multiple instance deep learning. In International Conference on World Wide Web (WWW), pages 907–915. Jihun Choi, Kang Min Yoo, and Sang goo Lee. 2017. Learning to compose task-specific tree structures. The Association for the Advancement of Artificial Intelligence (AAAI). Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning (ICML), pages 160–167. ACM. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 670–680. Oren Glickman, Ido Dagan, and Moshe Koppel. 2005. Web based probabilistic textual entailment. 998 Yichen Gong, Heng Luo, and Jian Zhang. 2018. Natural language inference over interaction space. In International Conference on Learning Representations (ICLR). Jerry R Hobbs. 1990. Literature and cognition. 21. Center for the Study of Language (CSLI). Yacine Jernite, Samuel R Bowman, and David Sontag. 2017. Discourse-based objectives for fast unsupervised sentence representation learning. arXiv preprint arXiv:1705.00557. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems (NIPS), pages 3294–3302. Bill MacCartney and Christopher D Manning. 2007. Natural logic for textual inference. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 193–200. Association for Computational Linguistics. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pages 55–60. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, Roberto Zamparelli, et al. 2014. A sick cure for the evaluation of compositional distributional semantic models. In LREC, pages 216–223. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems (NIPS), pages 6297– 6308. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (NIPS), pages 3111–3119. Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 130–136. Allen Nie, Erin D Bennett, and Noah D Goodman. 2017. Dissent: Sentence representation learning from explicit discourse relations. arXiv preprint arXiv:1710.04334. Yixin Nie and Mohit Bansal. 2017. Shortcutstacked sentence encoders for multi-domain inference. arXiv preprint arXiv:1708.02312. Boyuan Pan, Hao Li, Zhou Zhao, Deng Cai, and Xiaofei He. 2017a. Keyword-based query comprehending via multiple optimized-demand augmentation. arXiv preprint arXiv:1711.00179. Boyuan Pan, Hao Li, Zhou Zhao, Bin Cao, Deng Cai, and Xiaofei He. 2017b. Memen: Multi-layer embedding with memory networks for machine comprehension. arXiv preprint arXiv:1707.09098. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk`y, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. International Conference on Learning Representations (ICLR). Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Lei Sha, Baobao Chang, Zhifang Sui, and Sujian Li. 2016. Reading and thinking: Re-read lstm unit for textual entailment recognition. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2870–2879. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2017. A compare-propagate architecture with alignment factorization for natural language inference. arXiv preprint arXiv:1801.00102. Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2016. Order-embeddings of images and language. International Conference on Learning Representations (ICLR). Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. International Joint Conference on Artificial Intelligence (IJCAI). Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256. 999 Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W. Cohen, and Ruslan Salakhutdinov. 2017. Words or characters? fine-grained gating for reading comprehension. International Conference on Learning Representations (ICLR). Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. Yu Zhu, Hao Li, Yikang Liao, Beidou Wang, Ziyu Guan, Haifeng Liu, and Deng Cai. 2017. What to do next: modeling user behaviors by time-lstm. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), pages 3602–3608. 
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision (ICCV), pages 19–27.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1000–1009 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1000 Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module Juan Pavez*, H´ector Allende Department of Informatics Federico Santa Mar´ıa Technical University Valpara´ıso, Chile [email protected] [email protected] H´ector Allende-Cid Escuela de Ingenier´ıa Inform´atica Pont´ıfica Universidad Cat´olica de Valpara´ıso Valpara´ıso, Chile [email protected] Abstract During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that could allow, for instance, relational reasoning. Relation Networks (RNs), on the other hand, have shown outstanding results in relational reasoning tasks. Unfortunately, their computational cost grows quadratically with the number of memories, something prohibitive for larger problems. To solve these issues, we introduce the Working Memory Network, a MemNN architecture with a novel working memory storage and reasoning module. Our model retains the relational reasoning abilities of the RN while reducing its computational complexity from quadratic to linear. We tested our model on the text QA dataset bAbI and the visual QA dataset NLVR. In the jointly trained bAbI-10k, we set a new state-of-the-art, achieving a mean error of less than 0.5%. Moreover, a simple ensemble of two of our models solves all 20 tasks in the joint version of the benchmark. 1 Introduction A central ability needed to solve daily tasks is complex reasoning. It involves the capacity to comprehend and represent the environment, retain information from past experiences, and solve problems based on the stored information. Our ability to solve those problems is supported by multiple specialized components, including shortterm memory storage, long-term semantic and procedural memory, and an executive controller that, among others, controls the attention over memories (Baddeley, 1992). Many promising advances for achieving complex reasoning with neural networks have been obtained during the last years. Unlike symbolic approaches to complex reasoning, deep neural networks can learn representations from perceptual information. Because of that, they do not suffer from the symbol grounding problem (Harnad, 1999), and can generalize better than classical symbolic approaches. Most of these neural network models make use of an explicit memory storage and an attention mechanism. For instance, Memory Networks (MemNN), Dynamic Memory Networks (DMN) or Neural Turing Machines (NTM) (Weston et al., 2014; Kumar et al., 2016; Graves et al., 2014) build explicit memories from the perceptual inputs and access these memories using learned attention mechanisms. After that some memories have been attended, using a multi-step procedure, the attended memories are combined and passed through a simple output layer that produces a final answer. While this allows some multi-step inferential process, these networks lack a more complex reasoning mechanism, needed for more elaborated tasks such as inferring relations among entities (relational reasoning). On the contrary, Relation Networks (RNs), proposed in Santoro et al. 
(2017), have shown outstanding performance in relational reasoning tasks. Nonetheless, a major drawback of RNs is that they consider each of the input objects in pairs, having to process a quadratic number of relations. That limits the usability of the model on large problems and makes forward and backward computations quite expensive. To solve these problems we propose a novel Memory Network 1001 Figure 1: The W-MemNN model applied to textual question answering. Each input fact is processed using a GRU, and the output representation is stored in the short-term memory storage. Then, the attentional controller computes an output vector that summarizes relevant parts of the memories. This process is repeated H hops (a dotted line delimits each hop), and each output is stored in the working memory buffer. Finally, the output of each hop is passed to the reasoning module that produces the final output. architecture called the Working Memory Network (W-MemNN). Our model augments the original MemNN with a relational reasoning module and a new working memory buffer. The attention mechanism of the Memory Network allows the filtering of irrelevant inputs, reducing a lot of the computational complexity while keeping the relational reasoning capabilities of the RN. Three main components compose the W-MemNN: An input module that converts the perceptual inputs into an internal vector representation and save these representations into a short-term storage, an attentional controller that attend to these internal representations and update a working memory buffer, and a reasoning module that operates on the set of objects stored in the working memory buffer in order to produce a final answer. This component-based architecture is inspired by the well-known model from cognitive sciences called the multi-component working memory model, proposed in Baddeley and Hitch (1974). We studied the proposed model on the text-based QA benchmark bAbI (Weston et al., 2015) which consists of 20 different toy tasks that measure different reasoning skills. While models such as EntNet (Henaff et al., 2016) have focused on the pertask training version of the benchmark (where a different model is trained for each task), we decided to focus on the jointly trained version of the task, where the model is trained on all tasks simultaneously. In the jointly trained bAbI-10k benchmark we achieved state-of-the-art performance, improving the previous state-of-the-art on more than 2%. Moreover, a simple ensemble of two of our models can solve all 20 tasks simultaneously. Also, we tested our model on the visual QA dataset NLVR. In that dataset, we obtained performance at the level of the Module Neural Networks (Andreas et al., 2016). Our model, however, achieves these results using the raw input statements, without the extra text processing used in the Module Networks. Finally, qualitative and quantitative analysis shows that the inclusion of the Relational Reasoning module is crucial to improving the performance of the MemNN on tasks that involve relational reasoning. We can achieve this performance by also reducing the computation times of the RN considerably. Consequently, we hope that this contribution may allow applying RNs to larger problems. 2 Model Our model is based on the Memory Network architecture. Unlike MemNN we have included a reasoning module that helps the network to solve more complex tasks. The proposed model consists of three main modules: An input module, an at1002 tentional controller, and a reasoning module. 
The model processes the input information in multiple passes or hops. At each pass the output of the previous hop can condition the current pass, allowing some incremental refinement. Input module: The input module converts the perceptual information into an internal feature representation. The input information can be processed in chunks, and each chunk is saved into a short-term storage. The definition of what is a chunk of information depends on each task. For instance, for textual question answering, we define each chunk as a sentence. Other options might be n-grams or full documents. This short-term storage can only be accessed during the hop. Attentional Controller: The attentional controller decides in which parts of the short-term storage the model should focus. The attended memories are kept during all the hops in a working memory buffer. The attentional controller is conditioned by the task at hand, for instance, in question answering the question can condition the attention. Also, it may be conditioned by the output of previous hops, allowing the model to change its focus to new portions of the memory over time. Many models compute the attention for each memory using a compatibility function between the memory and the question. Then, the output is calculated as the weighted sum of the memory values, using the attention as weight. A simple way to compute the attention for each memory is to use dot-product attention. This kind of mechanism is used in the original Memory Network and computes the attention value as the dot product between each memory and the question. Although this kind of attention is simple, it may not be enough for more complex tasks. Also, since there are no learned weights in the attention mechanism, the attention relies entirely on the learned embeddings. That is something that we want to avoid in order to separate the learning of the input and attention module. One way to allow learning in the dot-product attention is to project the memories and query vectors linearly. That is done by multiplying each vector by a learned projection matrix (or equivalently a feed-forward neural network). In this way, we can set apart the attention and input embeddings learning, and also allow more complex patterns of attention. Reasoning Module: The memories stored in the working memory buffer are passed to the reasoning module. The choice of reasoning mechanism is left open and may depend on the task at hand. In this work, we use a Relation Network as the reasoning module. The RN takes the attended memories in pairs to infer relations among the memories. That can be useful, for example, in tasks that include comparisons. A detailed description of the full model is shown in Figure 1. 2.1 W-MemN2N for Textual Question Answering We proceed to describe an implementation of the model for textual question answering. In textual question answering the input consists of a set of sentences or facts, a question, and an answer. The goal is to answer the question correctly based on the given facts. Let (s, q, a) represents an input sample, consisting of a set of sentences s = {xi}L i=1, a query q and an answer a. Each sentence contains M words, {wi}M i=1, where each word is represented as a onehot vector of length |V |, being |V | the vocabulary size. The question contains Q words, represented as in the input sentences. Input Module Each word in each sentence is encoded into a vector representation vi using an embedding matrix W ∈R|V |×d, where d is the embedding size. 
Then, the sentence is converted into a memory vector mi using the final output of a gated recurrent neural network (GRU) (Chung et al., 2014): mi = GRU([v1, v2, ..., vM]) Each memory {mi}L i=1, where mi ∈Rd, is stored into the short-term memory storage. The question is encoded into a vector u in a similar way, using the output of a gated recurrent network. Attentional Controller Our attention module is based on the Multi-Head attention mechanism proposed in Vaswani et al. (2017). First, the memories are projected using a projection matrix Wm ∈Rd×d, as m′ i = Wmmi. Then, the similarity between the projected memory and the question is computed using the Scaled Dot-Product attention: αi = Softmax uT m′ i √ d  (1) = exp((uT m′ i)/ √ d) P j exp((uT m′ j)/ √ d) . (2) 1003 Next, the memories are combined using the attention weights αi, obtaining an output vector h = P j αjmj. In the Multi-Head mechanism, the memories are projected S times using different projection matrices {W s m}S s=1. For each group of projected memories, an output vector {hi}S i=1 is obtained using the Scaled Dot-Product attention (eq. 2). Finally, all vector outputs are concatenated and projected again using a different matrix: ok = [h1; h2; ...; hS]Wo, where ; is the concatenation operator and Wo ∈ RSd×d. The ok vector is the final response vector for the hop k. This vector is stored in the working memory buffer. The attention procedure can be repeated many times (or hops). At each hop, the attention can be conditioned on the previous hop by replacing the question vector u by the output of the previous hop. To do that we pass the output through a simple neural network ft. Then, we use the output of the network as the new conditioner: on k = ft(ok). (3) This network allows some learning in the transition patterns between hops. We found Multi-Head attention to be very useful in the joint bAbI task. This can be a product of the intrinsic multi-task nature of the bAbI dataset. A possibility is that each attention head is being adapted for different groups of related tasks. However, we did not investigate this further. Also, note that while in this section we use the same set of memories at each hop, this is not necessary. For larger sequences each hop can operate in different parts of the input sequence, allowing the processing of the input in various steps. Reasoning Module The outputs stored in the working memory buffer are passed to the reasoning module. The reasoning module used in this work is a Relation Network (RN). In the RN the output vectors are concatenated in pairs together with the question vector. Each pair is passed through a neural network gθ and all the outputs of the network are added to produce a single vector. Then, the sum is passed to a final neural network fφ: r = fφ  X i,j gθ([oi; oj; u])  , (4) The output of the Relation Network is then passed through a final weight matrix and a softmax to produce the predicted answer: ˆa = Softmax(V r), (5) where V ∈R|A|×dφ, |A| is the number of possible answers and dφ is the dimension of the output of fφ. The full network is trained end-to-end using standard cross-entropy between ˆa and the true label a. 3 Related Work 3.1 Memory Augmented Neural Networks During the last years, there has been plenty of work on achieving complex reasoning with deep neural networks. An important part of these developments has used some kind of explicit memory and attention mechanisms. One of the earliest recent work is that of Memory Networks (Weston et al., 2014). 
Memory Networks work by building an addressable memory from the inputs and then accessing those memories in a series of reading operations. Another, similar, line of work is the one of Neural Turing Machines. They were proposed in Graves et al. (2014) and are the basis for recent neural architectures including the Differentiable Neural Computer (DNC) and the Sparse Access Memory (SAM) (Graves et al., 2016; Rae et al., 2016). The NTM model also uses a content addressable memory, as in the Memory Network, but adds a write operation that allows updating the memory over time. The management of the memory, however, is different from the one of the MemNN. While the MemNN model pre-load the memories using all the inputs, the NTM writes and read the memory one input at a time. An additional model that makes use of explicit external memory is the Dynamic Memory Network (DMN) (Kumar et al., 2016; Xiong et al., 2016). The model shares some similarities with the Memory Network model. However, unlike the MemNN model, it operates in the input sequentially (as in the NTM model). The model defines an Episodic Memory module that makes use of a Gated Recurrent Neural Network (GRU) to store and update an internal state that represents the episodic storage. 3.2 Memory Networks Since our model is based on the MemNN architecture, we proceed to describe it in more detail. The 1004 Memory Network model was introduced in Weston et al. (2014). In that work, the authors proposed a model composed of four components: The input feature map that converts the input into an internal vector representation, the generalization module that updates the memories given the input, the output feature map that produces a new output using the stored memories, and the response module that produces the final answer. The model, as initially proposed, needed some strong supervision that explicitly tells the model which memories to attend. In order to solve that limitation, the End-To-End Memory Network (MemN2N) was proposed in Sukhbaatar et al. (2015). The model replaced the hard-attention mechanism used in the original MemNN by a softattention mechanism that allowed to train it endto-end without strong supervision. In our model, we use a component-based approach, as in the original MemNN architecture. However, there are some differences: First, our model makes use of two external storages: a short-term storage, and a working memory buffer. The first is equivalent to the one updated by the input and generalization module of the MemNN. The working memory buffer, on the other hand, does not have a counterpart in the original model. Second, our model replaces the response module by a reasoning module. Unlike the original MemNN, our reasoning module is intended to make more complex work than the response module, that was only designed to produce a final answer. 3.3 Relation Networks The ability to infer and learn relations between entities is fundamental to solve many complex reasoning problems. Recently, a number of neural network models have been proposed for this task. These include Interaction Networks, Graph Neural Networks, and Relation Networks (Battaglia et al., 2016; Scarselli et al., 2009; Santoro et al., 2017). In specific, Relation Networks (RNs) have shown excellent results in solving textual and visual question answering tasks requiring relational reasoning. The model is relatively simple: First, all the inputs are grouped in pairs and each pair is passed through a neural network. 
Then, the outputs of the first network are added, and another neural network processes the final vector. The role of the first network is to infer relations among each pair of objects. In Palm et al. (2017) the authors propose a recurrent extension to the RN. By allowing multiple steps of relational reasoning, the model can learn to solve more complex tasks. The main issue with the RN architecture is that its scale very poorly for larger problems. That is because it operates on O(n2) pairs, where n is the number of input objects (for instance, sentences in the case of textual question answering). This becomes quickly prohibitive for tasks involving many input objects. 3.4 Cognitive Science The concept of working memory has been extensively developed in cognitive psychology. It consists of a limited capacity system that allows temporary storage and manipulation of information and is crucial to any reasoning task. One of the most influential models of working memory is the multi-component model of working memory proposed by Baddeley and Hitch (1974). This model is composed both of a supervisory attentional controller (the central executive) and two short-term storage systems: The phonological loop, capable of holding speech-based information, and the visuospatial sketchpad, concerned with visual storage. The central executive plays various functions, including the capacity to focus attention, to divide attention and to control access to long-term memory. Later modifications to the model (Baddeley, 2000) include an episodic buffer that is capable of integrating and holding information from different sources. Connections of the working memory model to memory augmented neural networks have been already studied in Graves et al. (2014). We follow this effort and subdivide our model into components that resemble (in a basic way) the multi-component model of working memory. Note, however, that we use the term working memory buffer instead of episodic buffer. That is because the episodic buffer has an integration function that our model does not cover. However, that can be an interesting source of inspiration for next versions of the model that integrate both visual and textual information for question answering. 4 Experiments 4.1 Textual Question Answering To evaluate our model on textual question answering we used the Facebook bAbI-10k dataset (Weston et al., 2015). The bAbI dataset is a textual 1005 LSTM MN-S MN SDNC WMN WMN† 1: 1 supporting fact 0.0 0.0 0.0 0.0 0.0 0.0 2: 2 supporting facts 81.9 0.0 1.0 0.6 0.7 0.3 3: 3 supporting facts 83.1 0.0 6.8 0.7 5.3 4.6 4: 2 argument relations 0.2 0.0 0.0 0.0 0.0 0.0 5: 3 argument relations 1.2 0.3 6.1 0.3 0.6 0.4 6: yes/no questions 51.8 0.0 0.1 0.0 0.0 0.0 7: counting 24.9 3.3 6.6 0.2 0.6 0.5 8: lists/sets 34.1 1.0 2.7 0.2 0.2 0.3 9: simple negation 20.2 0.0 0.0 0.0 0.0 0.0 10: indefinite knowledge 30.1 0.0 0.5 0.2 0.5 0.0 11: basic coreference 10.3 0.0 0.0 0.0 0.3 0.0 12: conjunction 23.4 0.0 0.1 0.1 0.0 0.0 13: compound coreference 6.1 0.0 0.0 0.1 0.0 0.0 14: time reasoning 81.0 0.0 0.0 0.1 0.0 0.0 15: basic deduction 78.7 0.0 0.2 0.0 0.0 0.0 16: basic induction 51.9 0.0 0.2 54.1 0.0 0.3 17: positional reasoning 50.1 24.6 41.8 0.3 0.3 0.1 18: size reasoning 6.8 2.1 8.0 0.1 0.1 0.4 19: path finding 90.3 31.9 75.7 1.2 0.6 0.0 20: agent’s motivations 2.1 0. 0.0 0.0 0.0 0.0 Mean Error (%) 36.4 3.2 7.5 2.8 0.4 0.3 Failed tasks (err. > 5%) 16 2 6 1 1 0 Table 1: Test accuracies on the jointly trained bAbI-10k dataset. 
MN-S stands for strongly supervised Memory Network, MN-U for end-to-end Memory Network without supervision, and WMN for Working Memory Network. Results for LSTM, MN-U, and MN-S are took from Sukhbaatar et al. (2015). Results for SDNC are took from Rae et al. (2016). WMN† is an ensemble of two Working Memory Networks. QA benchmark composed of 20 different tasks. Each task is designed to test a different reasoning skill, such as deduction, induction, and coreference resolution. Some of the tasks need relational reasoning, for instance, to compare the size of different entities. Each sample is composed of a question, an answer, and a set of facts. There are two versions of the dataset, referring to different dataset sizes: bAbI-1k and bAbI-10k. In this work, we focus on the bAbI-10k version of the dataset which consists of 10, 000 training samples per task. A task is considered solved if a model achieves greater than 95% accuracy. Note that training can be done per-task or joint (by training the model on all tasks at the same time). Some models (Liu and Perez, 2017) have focused in the per-task training performance, including the EntNet model (Henaff et al., 2016) that solves all the tasks in the per-task training version. We choose to focus on the joint training version since we think is more indicative of the generalization properties of the model. A detailed analysis of the dataset can be found in Lee et al. (2015). Model Details To encode the input facts we used a word embedding that projected each word in a sentence into a real vector of size d. We defined d = 30 and used a GRU with 30 units to process each sentence. We used the 30 sentences in the support set that were immediately prior to the question. The question was processed using the same configuration but with a different GRU. We used 8 heads in the Multi-Head attention mechanism. For the transition networks ft, which operates in the output of each hop, we used a two-layer MLP consisting of 15 and 30 hidden units (so the output preserves the memory dimension). We used H = 4 hops (or equivalently, a working memory buffer of size 4). In the reasoning module, we used a 3layer MLP consisting of 128 units in each layer and with ReLU non-linearities for gθ. We omitted the fφ network since we did not observe improvements when using it. The final layer was a linear layer that produced logits for a softmax over the 1006 answer vocabulary. Training Details We trained our model end-to-end with a crossentropy loss function and using the Adam optimizer (Kingma and Ba, 2014). We used a learning rate of ν = 1e−3. We trained the model during 400 epochs. For training, we used a batch size of 32. As in Sukhbaatar et al. (2015) we did not average the loss over a batch. Also, we clipped gradients with norm larger than 40 (Pascanu et al., 2013). For all the dense layers we used ℓ2 regularization with value 1e−3. All weights were initialized using Glorot normal initialization (Glorot and Bengio, 2010). 10% of the training set was heldout to form a validation set that we used to select the architecture and for hyperparameter tunning. In some cases, we found useful to restart training after the 400 epochs with a smaller learning rate of 1e−5 and anneals every 5 epochs by ν/2 until 20 epochs were reached. bAbI-10k Results On the jointly trained bAbI-10k dataset our best model (out of 10 runs) achieves an accuracy of 99.58%. 
That is a 2.38% improvement over the previous state-of-the-art that was obtained by the Sparse Differential Neural Computer (SDNC) (Rae et al., 2016). The best model of the 10 runs solves almost all tasks of the bAbI-10k dataset (by a 0.3% margin). However, a simple ensemble of the best two models solves all 20 tasks and achieves an almost perfect accuracy of 99.7%. We list the results for each task in Table 1. Other authors have reported high variance in the results, for instance, the authors of the SDNC report a mean accuracy and standard deviation over 15 runs of 93.6 ± 2.5 (with 15.9 ± 1.6 passed tasks). In contrast, our model achieves a mean accuracy of 98.3 ± 1.2 (with 18.6 ± 0.4 passed tasks), which is better and more stable than the average results obtained by the SDNC. The Relation Network solves 18/20 tasks. We achieve even better performance, and with considerably fewer computations, as is explained in Section 4.3. We think that by including the attention mechanism, the relation reasoning module can focus on learning the relation among relevant objects, instead of learning spurious relations among irrelevant objects. For that, the Multi-Head attention mechanism was very helpful. The Effect of the Relational Reasoning Module When compared to the original Memory Network, our model substantially improves the accuracy of tasks 17 (positional reasoning) and 19 (path finding). Both tasks require the analysis of multiple relations (Lee et al., 2015). For instance, the task 19 needs that the model reasons about the relation of different positions of the entities, and in that way find a path to arrive from one to another. The accuracy improves in 75.1% for task 19 and in 41.5% for task 17 when compared with the MemN2N model. Since both tasks require reasoning about relations, we hypothesize that the relational reasoning module of the W-MemNN was of great help to improve the performance on both tasks. The Relation Network, on the other hand, fails in the tasks 2 (2 supporting facts) and 3 (3 supporting facts). Both tasks require handling a significant number of facts, especially in task 3. In those cases, the attention mechanism is crucial to filter out irrelevant facts. 4.2 Visual Question Answering To further study our model we evaluated its performance on a visual question answering dataset. For that, we used the recently proposed NLVR dataset (Suhr et al., 2017). Each sample in the NLVR dataset is composed of an image with three sub-images and a statement. The task consists in judging if the statement is true or false for that image. Evaluating the statement requires reasoning about the sets of objects in the image, comparing objects properties, and reasoning about spatial relations. The dataset is interesting for us for two reasons. First, the statements evaluation requires complex relational reasoning about the objects in the image. Second, unlike the bAbI dataset, the statements are written in natural language. Because of that, each statement displays a range of syntactic and semantic phenomena that are not present in the bAbI dataset. Model details Our model can be easily adapted to deal with visual information. Following the idea from Santoro et al. (2017), instead of processing each input using a recurrent neural network, we use a Convolutional Neural Network (CNN). The CNN takes as input each sub-image and convolved them through convolutional layers. 
The output of the CNN consists of k feature maps (where k is the number 1007 One tower with one block block at the top | Answer:False / Pred: False At least one square closely touching one box edge | Answer:True / Pred: True Story (2 supporting facts) Support Hop 1 Hop 2 Hop 3 Hop 4 Mary moved to the office. 0.79 0.30 0.15 0.15 Sandra travelled to the bedroom. True 0.02 2.64 2.75 0.39 Daniel dropped the football. 0.03 0.13 0.16 0.41 Sandra left the milk there. True 1.01 0.07 0.16 0.38 Daniel grabbed the football there. 0.08 0.31 0.07 0.27 Question: Where is the milk? Answer: bedroom, Pred: bedroom Story (2 supporting facts) Support Hop 1 Hop 2 Hop 3 Hop 4 Brian is white. 0.46 0.36 0.35 0.89 Bernhard is white. 0.07 0.13 0.19 0.81 Julius is a frog. True 0.16 2.03 0.39 0.26 Julius is white. True 0.09 0.23 2.42 1.32 Greg is a frog. True 1.95 1.60 0.77 0.25 Question: What color is greg? Answer: white, Pred: white Table 2: Examples of visualizations of attention for textual and visual QA. Top: Visualization of attention values for the NLVR dataset. To get more aesthetic figures we applied a gaussian blur to the attention matrix. Bottom: Attention values for the bAbI dataset. In each cell, the sum of the attention for all heads is shown. of kernels in the final convolutional layer) of size d × d. Then, each memory is built from the vector composed by the concatenation of the cells in the same position of each feature map. Consequently, d × d memories of size k are stored in the shortterm storage. The statement is processed using a GRU neural network as in the textual reasoning task. Then, we can proceed using the same architecture for the reasoning and attention module that the one used in the textual QA model. However, for the visual QA task, we used an additive attention mechanism. The additive attention computes the attention weight using a feed-forward neural network applied to the concatenation of the memory vector and statement vector. Results Our model achieves a validation / test accuracy of 65.6%/65.8%. Notably, we achieved a performance comparable to the results of the Module Neural Networks (Andreas et al., 2016) that make use of standard NLP tools to process the statements into structured representations. Unlike the Module Neural Networks, we achieved our results using only raw input statements, allowing the model to learn how to process the textual input by itself. Note that given the more complex nature of the language used in the NLVR dataset we needed to use a larger embedding size and GRU hidden layer than in the bAbI dataset (100 and 128 respectively). That, however, is a nice feature of separating the input from the reasoning and attention component: One way to process more complex language statements is increasing the capacity of the input module. 4.3 From O(n2) to O(n) One of the major limitations of RNs is that they need to process each one of the memories in pairs. To do that, the RN must perform O(n2) forward and backward passes (where n is the number of memories). That becomes quickly prohibitive for a larger number of memories. In contrast, the dependence of the W-MemNN run times on the number of memories is linear. Note, however, that computation times in the W-MemNN depend quadratically on the size of the working memory buffer. Nonetheless, this number is expected to be much smaller than the number of memories. To compare both models we measured the wall-clock time for a forward and backward pass for a single batch of size 32. 
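To make the cost comparison of Section 4.3 concrete before the wall-clock comparison in Figure 2, the following compact sketch ties together the full W-MemNN forward pass of Section 2.1. It is written in PyTorch purely for illustration (the paper does not tie this model to a framework), it uses a single attention head instead of the eight Multi-Head projections, and the class and layer names and the answer-vocabulary size are assumptions. The point to notice is the final double loop: the Relation Network gθ runs over the H × H pairs of working-memory slots (H = 4), not over the n × n pairs of input facts, which is where the linear dependence on the number of memories comes from.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WMemNN(nn.Module):
    def __init__(self, vocab_size, n_answers, d=30, hops=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)
        self.fact_gru = nn.GRU(d, d, batch_first=True)       # encodes each fact into a memory m_i
        self.q_gru = nn.GRU(d, d, batch_first=True)          # encodes the question into u
        self.proj = nn.Linear(d, d, bias=False)              # W_m (the paper uses 8 such heads)
        self.ft = nn.Sequential(nn.Linear(d, 15), nn.ReLU(), nn.Linear(15, d))  # transition network f_t
        self.g = nn.Sequential(nn.Linear(3 * d, 128), nn.ReLU(),
                               nn.Linear(128, 128), nn.ReLU(),
                               nn.Linear(128, 128), nn.ReLU())                  # g_theta of the RN
        self.out = nn.Linear(128, n_answers)                  # final linear layer (f_phi omitted, as in the paper)
        self.hops, self.d = hops, d

    def forward(self, facts, question):
        # facts: (batch, n_facts, n_words) word ids; question: (batch, n_words) word ids
        b, n, _ = facts.size()
        m = self.fact_gru(self.emb(facts.view(b * n, -1)))[1][-1].view(b, n, self.d)  # short-term storage
        u = self.q_gru(self.emb(question))[1][-1]                                      # question vector
        buffer, cond = [], u
        for _ in range(self.hops):
            scores = (self.proj(m) @ cond.unsqueeze(2)).squeeze(2) / self.d ** 0.5     # scaled dot-product
            att = F.softmax(scores, dim=1)
            o = (att.unsqueeze(2) * m).sum(dim=1)              # attended output o_k for this hop
            buffer.append(o)                                   # store o_k in the working memory buffer
            cond = self.ft(o)                                  # condition the next hop on o_k
        # Relation Network over the working-memory buffer: H*H pairs, independent of n
        r = sum(self.g(torch.cat([oi, oj, u], dim=1)) for oi in buffer for oj in buffer)
        return self.out(r)                                     # logits over the answer vocabulary
```

Because the pairwise loop ranges only over the H buffered hop outputs, adding more input facts grows the attention cost linearly while the relational reasoning cost stays fixed, which is the behaviour measured in Figure 2.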
We performed these experiments on a GPU NVIDIA K80. Figure 2 shows the results. 4.4 Memory Visualizations One nice feature from Memory Networks is that they allow some interpretability of the reasoning procedure by looking at the attention weights. At each hop, the attention weights show which parts of the memory the model found relevant to produce the output. RNs, on the contrary, lack of this feature. Table 2 shows the attention values for visual and textual question answering. 1008 5 10 15 20 25 30 Number of Memories 0 200 400 600 800 Wall Time / Iteration [sec] Relation Network W-MemNN Figure 2: Wall-clock times for a forward and backward pass for a single batch. The batch size used is 32. While for 5 memories the times are comparable, for 30 memories the W-MemNN takes around 50s while the RN takes 930s, a speedup of almost 20×. 5 Conclusion We have proposed a novel Working Memory Network architecture that introduces improved reasoning abilities to the original MemNN model. We demonstrated that by augmenting the MemNN architecture with a Relation Network, the computational complexity of the RN can be reduced, without loss of performance. That opens the opportunity for using RNs in larger problems, something that may be very useful, given the many tasks requiring a significant amount of memories. Although we have used RN as the reasoning module in this work, other options can be tested. It might be interesting to analyze how other reasoning modules can improve different weaknesses of the model. We presented results on the jointly trained bAbI10k dataset, where we achieve a new state-of-theart, with an average error of less than 0.5%. Also, we showed that our model can be easily adapted for visual question answering. Our architecture combines perceptual input processing, short-term memory storage, an attention mechanism, and a reasoning module. While other models have focused on different parts of these components, we think that is important to find ways to combine these different mechanisms if we want to build models capable of complex reasoning. Evidence from cognitive sciences seems to show that all these abilities are needed in order to achieve human-level complex reasoning. Acknowledgments JP was supported by the Scientific and Technological Center of Valpara´ıso (CCTVal) under Fondecyt grant BASAL FB0821. HA was supported through the research project Fondecyt-Conicyt 1170123. The work of HAC was supported by the research project Fondecyt Initiation into Research 11150248. References Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 39–48. Alan Baddeley. 1992. Working memory. Science 255(5044):556–559. Alan Baddeley. 2000. The episodic buffer: a new component of working memory? Trends in cognitive sciences 4(11):417–423. Alan D Baddeley and Graham Hitch. 1974. Working memory. In Psychology of learning and motivation, Elsevier, volume 8, pages 47–89. Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. 2016. Interaction networks for learning about objects, relations and physics. In Advances in neural information processing systems. pages 4502–4510. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 . Xavier Glorot and Yoshua Bengio. 2010. 
Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. pages 249–256. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401 . Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka GrabskaBarwi´nska, Sergio G´omez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. 2016. Hybrid computing using a neural network with dynamic external memory. Nature 538(7626):471. Stevan Harnad. 1999. The symbol grounding problem. CoRR cs.AI/9906002. http://arxiv.org/abs/cs.AI/9906002. Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2016. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969 . 1009 Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In International Conference on Machine Learning. pages 1378–1387. Moontae Lee, Xiaodong He, Wen-tau Yih, Jianfeng Gao, Li Deng, and Paul Smolensky. 2015. Reasoning in vector space: An exploratory study of question answering. arXiv preprint arXiv:1511.06426 . Fei Liu and Julien Perez. 2017. Gated end-to-end memory networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. volume 1, pages 1–10. Rasmus Berg Palm, Ulrich Paquet, and Ole Winther. 2017. Recurrent relational networks for complex relational reasoning. CoRR abs/1711.08028. http://arxiv.org/abs/1711.08028. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning. pages 1310–1318. Jack Rae, Jonathan J Hunt, Ivo Danihelka, Timothy Harley, Andrew W Senior, Gregory Wayne, Alex Graves, and Tim Lillicrap. 2016. Scaling memoryaugmented neural networks with sparse reads and writes. In Advances in Neural Information Processing Systems. pages 3621–3629. Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Tim Lillicrap. 2017. A simple neural network module for relational reasoning. In Advances in neural information processing systems. pages 4974– 4983. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2009. The graph neural network model. IEEE Transactions on Neural Networks 20(1):61–80. Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. 2017. A corpus of natural language for visual reasoning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, pages 217–223. https://doi.org/10.18653/v1/P17-2034. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems. pages 2440–2448. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. https://arxiv.org/pdf/1706.03762.pdf. Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merri¨enboer, Armand Joulin, and Tomas Mikolov. 2015. 
Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698 . Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. CoRR abs/1410.3916. http://arxiv.org/abs/1410.3916. Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. In International Conference on Machine Learning. pages 2397–2406.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1010–1020 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1010 Reasoning with Sarcasm by Reading In-between Yi Tay†, Luu Anh Tuanψ, Siu Cheung Huiφ, Jian Suδ †[email protected] ψ[email protected] φ[email protected] δ[email protected] †,φSchool of Computer Science and Engineering, Nanyang Technological University ψ,δA*Star, Institute for Infocomm Research, Singapore Abstract Sarcasm is a sophisticated speech act which commonly manifests on social communities such as Twitter and Reddit. The prevalence of sarcasm on the social web is highly disruptive to opinion mining systems due to not only its tendency of polarity flipping but also usage of figurative language. Sarcasm commonly manifests with a contrastive theme either between positive-negative sentiments or between literal-figurative scenarios. In this paper, we revisit the notion of modeling contrast in order to reason with sarcasm. More specifically, we propose an attention-based neural model that looks inbetween instead of across, enabling it to explicitly model contrast and incongruity. We conduct extensive experiments on six benchmark datasets from Twitter, Reddit and the Internet Argument Corpus. Our proposed model not only achieves stateof-the-art performance on all datasets but also enjoys improved interpretability. 1 Introduction Sarcasm, commonly defined as ‘An ironical taunt used to express contempt’, is a challenging NLP problem due to its highly figurative nature. The usage of sarcasm on the social web is prevalent and can be frequently observed in reviews, microblogs (tweets) and online forums. As such, the battle against sarcasm is also regularly cited as one of the key challenges in sentiment analysis and opinion mining applications (Pang et al., 2008). Hence, it is both imperative and intuitive that effective sarcasm detectors can bring about numerous benefits to opinion mining applications. Sarcasm is often associated to several linguistic phenomena such as (1) an explicit contrast between sentiments or (2) disparity between the conveyed emotion and the author’s situation (context). Prior work has considered sarcasm to be a contrast between a positive and negative sentiment (Riloff et al., 2013). Consider the following examples: 1. I absolutely love to be ignored! 2. Yay!!! The best thing to wake up to is my neighbor’s drilling. 3. Perfect movie for people who can’t fall asleep. Given the examples, we make a crucial observation - Sarcasm relies a lot on the semantic relationships (and contrast) between individual words and phrases in a sentence. For instance, the relationships between phrases {love, ignored}, {best, drilling} and {movie, asleep} (in the examples above) richly characterize the nature of sarcasm conveyed, i.e., word pairs tend to be contradictory and more often than not, express a juxtaposition of positive and negative terms. This concept is also explored in (Joshi et al., 2015) in which the authors refer to this phenomena as ‘incongruity’. Hence, it would be useful to capture the relationships between selected word pairs in a sentence, i.e., looking in-between. State-of-the-art sarcasm detection systems mainly rely on deep and sequential neural networks (Ghosh and Veale, 2016; Zhang et al., 2016). 
In these works, compositional encoders such as gated recurrent units (GRU) (Cho et al., 2014) or long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) are often employed, with the input document being parsed one word at a time. This has several shortcomings for the sarcasm detection task. Firstly, there is 1011 no explicit interaction between word pairs, which hampers its ability to explicitly model contrast, incongruity or juxtaposition of situations. Secondly, it is difficult to capture long-range dependencies. In this case, contrastive situations (or sentiments) which are commonplace in sarcastic language may be hard to detect with simple sequential models. To overcome the weaknesses of standard sequential models such as recurrent neural networks, our work is based on the intuition that modeling intra-sentence relationships can not only improve classification performance but also pave the way for more explainable neural sarcasm detection methods. In other words, our key intuition manifests itself in the form of an attention-based neural network. While the key idea of most neural attention mechanisms is to focus on relevant words and sub-phrases, it merely looks across and does not explicitly capture word-word relationships. Hence, it suffers from the same shortcomings as sequential models. In this paper, our aim is to combine the effectiveness of state-of-the-art recurrent models while harnessing the intuition of looking in-between. We propose a multi-dimensional intra-attention recurrent network that models intricate similarities between each word pair in the sentence. In other words, our novel deep learning model aims to capture ‘contrast’ (Riloff et al., 2013) and ‘incongruity’ (Joshi et al., 2015) within end-to-end neural networks. Our model can be thought of selftargeted co-attention (Xiong et al., 2016), which allows our model to not only capture word-word relationships but also long-range dependencies. Finally, we show that our model produces interpretable attention maps which aid in the explainability of model outputs. To the best of our knowledge, our model is the first attention model that can produce explainable results in the sarcasm detection task. Briefly, the prime contributions of this work can be summarized as follows: • We propose a new state-of-the-art method for sarcasm detection. Our proposed model, the Multi-dimensional Intra-Attention Recurrent Network (MIARN) is strongly based on the intuition of compositional learning by leveraging intra-sentence relationships. To the best of our knowledge, none of the existing state-of-the-art models considered exploiting intra-sentence relationships, solely relying on sequential composition. • We conduct extensive experiments on multiple benchmarks from Twitter, Reddit and the Internet Argument Corpus. Our proposed MIARN achieves highly competitive performance on all benchmarks, outperforming existing state-of-the-art models such as GRNN (Zhang et al., 2016) and CNN-LSTM-DNN (Ghosh and Veale, 2016). 2 Related Work Sarcasm is a complex linguistic phenomena that have long fascinated both linguists and NLP researchers. After all, a better computational understanding of this complicated speech act could potentially bring about numerous benefits to existing opinion mining applications. Across the rich history of research on sarcasm, several theories such as the Situational Disparity Theory (Wilson, 2006) and the Negation Theory (Giora, 1995) have emerged. 
In these theories, a common theme is a motif that is strongly grounded in contrast, whether in sentiment, intention, situation or context. (Riloff et al., 2013) propagates this premise forward, presenting an algorithm strongly based on the intuition that sarcasm arises from a juxtaposition of positive and negative situations. 2.1 Sarcasm Detection Naturally, many works in this area have treated the sarcasm detection task as a standard text classification problem. An extremely comprehensive overview can be found at (Joshi et al., 2017). Feature engineering approaches were highly popular, exploiting a wide diverse range of features such as syntactic patterns (Tsur et al., 2010), sentiment lexicons (Gonz´alez-Ib´anez et al., 2011), ngram (Reyes et al., 2013), word frequency (Barbieri et al., 2014), word shape and pointedness features (Pt´aˇcek et al., 2014), readability and flips (Rajadesingan et al., 2015), etc. Notably, there have been quite a reasonable number of works that propose features based on similarity and contrast. (Hern´andez-Far´ıas et al., 2015) measured the Wordnet based semantic similarity between words. (Joshi et al., 2015) proposed a framework based on explicit and implicit incongruity, utilizing features based on positive-negative patterns. (Joshi et al., 2016) proposed similarity features based on word embeddings. 1012 2.2 Deep Learning for Sarcasm Detection Deep learning based methods have recently garnered considerable interest in many areas of NLP research. In our problem domain, (Zhang et al., 2016) proposed a recurrent-based model with a gated pooling mechanism for sarcasm detection on Twitter. (Ghosh and Veale, 2016) proposed a convolutional long-short-term memory network (CNN-LSTM-DNN) that achieves state-of-the-art performance. While our work focuses on document-only sarcasm detection, several notable works have proposed models that exploit personality information (Ghosh and Veale, 2017) and user context (Amir et al., 2016). Novel methods for sarcasm detection such as gaze / cognitive features (Mishra et al., 2016, 2017) have also been explored. (Peled and Reichart, 2017) proposed a novel framework based on neural machine translation to convert a sequence from sarcastic to non-sarcastic. (Felbo et al., 2017) proposed a layer-wise training scheme that utilizes emoji-based distant supervision for sentiment analysis and sarcasm detection tasks. 2.3 Attention Models for NLP In the context of NLP, the key idea of neural attention is to soft select a sequence of words based on their relative importance to the task at hand. Early innovations in attentional paradigms mainly involve neural machine translation (Luong et al., 2015; Bahdanau et al., 2014) for aligning sequence pairs. Attention is also commonplace in many NLP applications such as sentiment classification (Chen et al., 2016; Yang et al., 2016), aspect-level sentiment analysis (Tay et al., 2018s, 2017b; Chen et al., 2017) and entailment classification (Rockt¨aschel et al., 2015). Co-attention / Bi-Attention (Xiong et al., 2016; Seo et al., 2016) is a form of pairwise attention mechanism that was proposed to model query-document pairs. Intraattention can be interpreted as a self-targetted coattention and is seeing a lot promising results in many recent works (Vaswani et al., 2017; Parikh et al., 2016; Tay et al., 2017a; Shen et al., 2017). The key idea is to model a sequence against itself, learning to attend while capturing long term dependencies and word-word level interactions. 
To the best of our knowledge, our work is not only the first work that only applies intra-attention to sarcasm detection but also the first attention model for sarcasm detection. 3 Our Proposed Approach In this section, we describe our proposed model. Figure 1 illustrates our overall model architecture. 3.1 Input Encoding Layer Our model accepts a sequence of one-hot encoded vectors as an input. Each one-hot encoded vector corresponds to a single word in the vocabulary. In the input encoding layer, each one-hot vector is converted into a low-dimensional vector representation (word embedding). The word embeddings are parameterized by an embedding layer W ∈ Rn×|V |. As such, the output of this layer is a sequence of word embeddings, i.e., {w1, w2, · · · wℓ} where ℓis a predefined maximum sequence length. 3.2 Multi-dimensional Intra-Attention In this section, we describe our multi-dimensional intra-attention mechanism for sarcasm detection. We first begin by describing the standard single-dimensional intra-attention. The multidimensional adaptation will be introduced later in this section. The key idea behind this layer is to look in-between, i.e., modeling the semantics between each word in the input sequence. We first begin by modeling the relationship of each word pair in the input sequence. A simple way to achieve this is to use a linear1 transformation layer to project the concatenation of each word embedding pair into a scalar score as follows: sij = Wa([wi; wj]) + ba (1) where Wa ∈R2n×1, ba ∈R are the parameters of this layer. [.; .] is the vector concatenation operator and sij is a scalar representing the affinity score between word pairs (wi, wj). We can easily observe that s is a symmetrical matrix of ℓ× ℓdimensions. In order to learn attention vector a, we apply a row-wise max-pooling operator on matrix s. a = softmax(max row s) (2) where a ∈ Rℓis a vector representing the learned intra-attention weights. Then, the vector a is employed to learn weighted representation of {w1, w2 · · · wℓ} as follows: va = ℓ X i=1 wiai (3) 1Early experiments found that adding nonlinearity here may degrade performance. 1013 where v ∈Rn is the intra-attentive representation of the input sequence. While other choices of pooling operators may be also employed (e.g., mean-pooling over max-pooling), the choice of max-pooling is empirically motivated. Intuitively, this attention layer learns to pay attention based on a word’s largest contribution to all words in the sequence. Since our objective is to highlight words that might contribute to the contrastive theories of sarcasm, a more discriminative pooling operator is desirable. Notably, we also mask values of s where i = j such that we do not allow the relationship scores of a word with respect to itself to influence the overall attention weights. Furthermore, our network can be considered as an ‘inner’ adaptation of neural attention, modeling intra-sentence relationships between the raw word representations instead of representations that have been compositionally manipulated. This allows word-to-word similarity to be modeled ‘as it is’ and not be influenced by composition. For example, when using the outputs of a compositional encoder (e.g., LSTM), matching words n and n + 1 might not be meaningful since they would be relatively similar in terms of semantic composition. For relatively short documents (such as tweets), it is also intuitive that attention typically focuses on the last hidden representation. 
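To ground Equations (1)–(3), the sketch below gives one possible rendering of the single-dimensional intra-attention layer. It is an illustrative PyTorch implementation under our own naming (the authors' implementation is in TensorFlow, per Section 4.3), and padding masks and other batching details are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleDimIntraAttention(nn.Module):
    """Single-dimensional intra-attention, Eqs. (1)-(3): score each word pair,
    mask the diagonal, row-wise max-pool, softmax, and take a weighted sum."""
    def __init__(self, emb_dim):
        super().__init__()
        self.affinity = nn.Linear(2 * emb_dim, 1)            # W_a and b_a in Eq. (1)

    def forward(self, w):                                    # w: (batch, L, emb_dim)
        batch, L, n = w.shape
        wi = w.unsqueeze(2).expand(batch, L, L, n)           # word i broadcast over columns
        wj = w.unsqueeze(1).expand(batch, L, L, n)           # word j broadcast over rows
        s = self.affinity(torch.cat([wi, wj], dim=-1)).squeeze(-1)   # (batch, L, L)
        diag = torch.eye(L, dtype=torch.bool, device=w.device)
        s = s.masked_fill(diag, float('-inf'))               # mask i == j, as in the paper
        a = F.softmax(s.max(dim=-1).values, dim=-1)          # Eq. (2): row-wise max, softmax
        v_a = torch.bmm(a.unsqueeze(1), w).squeeze(1)        # Eq. (3): intra-attentive repr.
        return v_a, a
```

The attention vector `a` returned here corresponds to the weights visualized later in Section 4.5.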
Intuitively, the relationships between two words is often not straightforward. Words are complex and often hold more than one meanings (or word senses). As such, it might be beneficial to model multiple views between two words. This can be modeled by representing the word pair interaction with a vector instead of a scalar. As such, we propose a multi-dimensional adaptation of the intra-attention mechanism. The key idea here is that each word pair is projected down to a lowdimensional vector before we compute the affinity score, which allows it to not only capture one view (one scalar) but also multiple views. A modification to Equation (1) constitutes our MultiDimensional Intra-Attention variant. sij = Wp(ReLU(Wq([wi; wj]) + bq)) + bp (4) where Wq ∈Rn×k, Wp ∈Rk×1, bq ∈Rk, bp ∈R are the parameters of this layer. The final intraattentive representation is then learned with Equation (2) and Equation (3) which we do not repeat here for the sake of brevity. Word Embeddings I absolutely love to ignored be Multi-Dimensional Intra-Attention I absolutely love to be ignored Dense Layer Compositional Representation Softmax / Attention I absolutely love to be ignored Dense Layer Softmax Intra-Attentive Representation LSTM Encoder Figure 1: High level overview of our proposed MIARN architecture. MIARN learns two representations, one based on intra-sentence relationships (intra-attentive) and another based on sequential composition (LSTM). Both views are used for prediction. 3.3 Long Short-Term Memory Encoder While we are able to simply use the learned representation v for prediction, it is clear that v does not encode compositional information and may miss out on important compositional phrases such as ‘not happy’. Clearly, our intra-attention mechanism simply considers a word-by-word interaction and does not model the input document sequentially. As such, it is beneficial to use a separate compositional encoder for this purpose, i.e., learning compositional representations. To this end, we employ the standard Long Short-Term Memory (LSTM) encoder. The output of an LSTM encoder at each time-step can be briefly defined as: hi = LSTM(w, i), ∀i ∈[1, . . . ℓ] (5) where ℓrepresents the maximum length of the sequence and hi ∈Rd is the hidden output of the LSTM encoder at time-step i. d is the size of the hidden units of the LSTM encoder. LSTM encoders are parameterized by gating mechanisms learned via nonlinear transformations. Since 1014 LSTMs are commonplace in standard NLP applications, we omit the technical details for the sake of brevity. Finally, to obtain a compositional representation of the input document, we use vc = hℓ which is the last hidden output of the LSTM encoder. Note that the inputs to the LSTM encoder are the word embeddings right after the input encoding layer and not the output of the intraattention layer. We found that applying an LSTM on the intra-attentively scaled representations do not yield any benefits. 3.4 Prediction Layer The inputs to the final prediction layer are two representations, namely (1) the intra-attentive representation (va ∈Rn) and (2) the compositional representation (vc ∈Rd). This layer learns a joint representation of these two views using a nonlinear projection layer. v = ReLU(Wz([va; vc]) + bz) (6) where Wz ∈R(d+n)×d and bz ∈Rd. Finally, we pass v into a Softmax classification layer. ˆy = Softmax(Wf v + bf) (7) where Wf ∈Rd×2, bf ∈R2 are the parameters of this layer. ˆy ∈R2 is the output layer of our proposed model. 
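Combining Sections 3.2–3.4, a minimal end-to-end sketch of the model might look as follows. This is again only an illustrative PyTorch rendering under our own naming: `intra_dim` plays the role of k in Equation (4), and the actual hyperparameter values are those reported in Section 4.3.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MIARN(nn.Module):
    """Sketch of MIARN: multi-dimensional intra-attention (Eq. 4) yields v_a,
    an LSTM encoder yields v_c, then the joint projection and output layer (Eqs. 6-7)."""
    def __init__(self, emb_dim, hidden_dim, intra_dim):
        super().__init__()
        self.pair_proj = nn.Linear(2 * emb_dim, intra_dim)    # W_q, b_q in Eq. (4)
        self.pair_score = nn.Linear(intra_dim, 1)             # W_p, b_p in Eq. (4)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.joint = nn.Linear(emb_dim + hidden_dim, hidden_dim)   # W_z, b_z in Eq. (6)
        self.out = nn.Linear(hidden_dim, 2)                        # W_f, b_f in Eq. (7)

    def forward(self, w):                                     # w: (batch, L, emb_dim)
        B, L, n = w.shape
        wi = w.unsqueeze(2).expand(B, L, L, n)
        wj = w.unsqueeze(1).expand(B, L, L, n)
        s = self.pair_score(F.relu(self.pair_proj(torch.cat([wi, wj], dim=-1)))).squeeze(-1)
        s = s.masked_fill(torch.eye(L, dtype=torch.bool, device=w.device), float('-inf'))
        a = F.softmax(s.max(dim=-1).values, dim=-1)
        v_a = torch.bmm(a.unsqueeze(1), w).squeeze(1)         # intra-attentive representation
        _, (h_n, _) = self.lstm(w)                            # h_n: (1, batch, hidden_dim)
        v_c = h_n.squeeze(0)                                  # compositional representation
        v = F.relu(self.joint(torch.cat([v_a, v_c], dim=-1)))
        return self.out(v)                                    # class logits
```

The logits returned here would then be trained with the cross-entropy objective described next in Section 3.5.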
3.5 Optimization and Learning Our network is trained end-to-end, optimizing the standard binary cross-entropy loss function. J = − N X i=1 [yi log ˆyi + (1 −yi) log(1 −ˆyi)] + R (8) where J is the cost function, ˆy is the output of the network, R = ||θ||L2 is the L2 regularization and λ is the weight of the regularizer. 4 Empirical Evaluation In this section, we describe our experimental setup and results. Our experiments were designed to answer the following research questions (RQs). • RQ1 - Does our proposed approach outperform existing state-of-the-art models? • RQ2 - What are the impacts of some of the architectural choices of our model? How much does intra-attention contribute to the model performance? Is the MultiDimensional adaptation better than the Single-Dimensional adaptation? • RQ3 - What can we interpret from the intraattention layers? Does this align with our hypothesis about looking in-between and modeling contrast? 4.1 Datasets We conduct our experiments on six publicly available benchmark datasets which span across three well-known sources. • Tweets - Twitter2 is a microblogging platform which allows users to post statuses of less than 140 characters. We use two collections for sarcasm detection on tweets. More specifically, we use the dataset obtained from (1) (Pt´aˇcek et al., 2014) in which tweets are trained via hashtag based semisupervised learning, i.e., hashtags such as #not, #sarcasm and #irony are marked as sarcastic tweets and (2) (Riloff et al., 2013) in which Tweets are hand annotated and manually checked for sarcasm. For both datasets, we retrieve. Tweets using the Twitter API using the provided tweet IDs. • Reddit - Reddit3 is a highly popular social forum and community. Similar to Tweets, sarcastic posts are obtained via the tag ‘/s’ which are marked by the authors themselves. We use two Reddit datasets which are obtained from the subreddits /r/movies and /r/technology respectively. Datasets are subsets from (Khodak et al., 2017). • Debates - We use two datasets4 from the Internet Argument Corpus (IAC) (Lukin and Walker, 2017) which have been hand annotated for sarcasm. This dataset, unlike the first two, is mainly concerned with long text and provides a diverse comparison from the other datasets. The IAC corpus was designed for research on political debates on online forums. We use the V1 and V2 versions of the sarcasm corpus which are denoted as IAC-V1 and IAC-V2 respectively. The statistics of the datasets used in our experiments is reported in Table 1. 2https://twitter.com 3https://reddit.com 4https://nlds.soe.ucsc.edu/sarcasm1 1015 Dataset Train Dev Test Avg ℓ Tweets (Pt´aˇcek et al.) 44017 5521 5467 18 Tweets (Riloff et al.) 1369 195 390 14 Reddit (/r/movies) 5895 655 1638 12 Reddit (/r/technology) 16146 1793 4571 11 Debates IAC-V1 3716 464 466 54 Debates IAC-V2 1549 193 193 64 Table 1: Statistics of datasets used in our experiments. 4.2 Compared Methods We compare our proposed model with the following algorithms. • NBOW is a simple neural bag-of-words baseline that sums all the word embeddings and passes the summed vector into a simple logistic regression layer. • CNN is a vanilla Convolutional Neural Network with max-pooling. CNNs are considered as compositional encoders that capture n-gram features by parameterized sliding windows. The filter width is 3 and number of filters f = 100. • LSTM is a vanilla Long Short-Term Memory Network. The size of the LSTM cell is set to d = 100. 
• ATT-LSTM (Attention-based LSTM) is a LSTM model with a neural attention mechanism applied to all the LSTM hidden outputs. We use a similar adaptation to (Yang et al., 2016), albeit only at the document-level. • GRNN (Gated Recurrent Neural Network) is a Bidirectional Gated Recurrent Unit (GRU) model that was proposed for sarcasm detection by (Zhang et al., 2016). GRNN uses a gated pooling mechanism to aggregate the hidden representations from a standard BiGRU model. Since we only compare on document-level sarcasm detection, we do not use the variant of GRNN that exploits user context. • CNN-LSTM-DNN (Convolutional LSTM + Deep Neural Network), proposed by (Ghosh and Veale, 2016), is the state-of-theart model for sarcasm detection. This model is a combination of a CNN, LSTM and Deep Neural Network via stacking. It stacks two layers of 1D convolution with 2 LSTM layers. The output passes through a deep neural network (DNN) for prediction. Both CNN-LSTM-DNN (Ghosh and Veale, 2016) and GRNN (Zhang et al., 2016) are state-ofthe-art models for document-level sarcasm detection and have outperformed numerous neural and non-neural baselines. In particular, both works have well surpassed feature-based models (Support Vector Machines, etc.), as such we omit comparisons for the sake of brevity and focus comparisons with recent neural models instead. Moreover, since our work focuses only on document-level sarcasm detection, we do not compare against models that use external information such as user profiles, context, personality information (Ghosh and Veale, 2017) or emoji-based distant supervision (Felbo et al., 2017). For our model, we report results on both multi-dimensional and single-dimensional intraattention. The two models are named as MIARN and SIARN respectively. 4.3 Implementation Details and Metrics We adopt standard the evaluation metrics for the sarcasm detection task, i.e., macro-averaged F1 and accuracy score. Additionally, we also report precision and recall scores. All deep learning models are implemented using TensorFlow (Abadi et al., 2015) and optimized on a NVIDIA GTX1070 GPU. Text is preprocessed with NLTK5’s Tweet tokenizer. Words that only appear once in the entire corpus are removed and marked with the UNK token. Document lengths are truncated at 40, 20, 80 tokens for Twitter, Reddit and Debates dataset respectively. Mentions of other users on the Twitter dataset are replaced by ‘@USER’. Documents with URLs (i.e., containing ‘http’) are removed from the corpus. Documents with less than 5 tokens are also removed. The learning optimizer used is the RMSProp with an initial learning rate of 0.001. The L2 regularization is set to 10−8. We initialize the word embedding layer with GloVe (Pennington et al., 2014). We use the GloVe model trained on 2B Tweets for the Tweets and Reddit dataset. The Glove model trained on Common Crawl is used for the Debates corpus. The size of the word embeddings is fixed at d = 100 and are fine-tuned during training. In all experiments, we use a development set to select the best hyperparameters. Each model is trained for a total of 30 epochs and the model is saved each time the performance 5https://nltk.org 1016 Tweets (Pt´aˇcek et al., 2014) Tweets (Riloff et al., 2013) Model P R F1 Acc P R F1 Acc NBOW 80.02 79.06 79.43 80.39 71.28 62.37 64.13 79.23 Vanilla CNN 82.13 79.67 80.39 81.65 71.04 67.13 68.55 79.48 Vanilla LSTM 84.62 83.21 83.67 84.50 67.33 67.20 67.27 76.27 Attention LSTM 84.16 85.10 83.67 84.40 68.78 68.63 68.71 77.69 GRNN (Zhang et al.) 
84.06 83.02 83.43 84.20 66.32 64.74 65.40 76.41 CNN-LSTM-DNN (Ghosh and Veale) 84.06 83.45 83.74 84.39 69.76 66.62 67.81 78.72 SIARN (this paper) 85.02 84.27 84.59 85.24 73.82 73.26 73.24 82.31 MIARN (this paper) 86.13 85.79 86.00 86.47 73.34 68.34 70.10 80.77 Table 2: Experimental Results on Tweets datasets. Best result in is boldface and second best is underlined. Best performing baseline is in italics. Reddit (/r/movies) Reddit (/r/technology) Model P R F1 Acc P R F1 Acc NBOW 67.33 66.56 66.82 67.52 65.45 65.62 65.52 66.55 Vanilla CNN 65.97 65.97 65.97 66.24 65.88 62.90 62.85 66.80 Vanilla LSTM 67.57 67.67 67.32 67.34 66.94 67.22 67.03 67.92 Attention LSTM 68.11 67.87 67.94 68.37 68.20 68.78 67.44 67.22 GRNN (Zhang et al.) 66.16 66.16 66.16 66.42 66.56 66.73 66.66 67.65 CNN-LSTM-DNN (Ghosh and Veale) 68.27 67.87 67.95 68.50 66.14 66.73 65.74 66.00 SIARN (this paper) 69.59 69.48 69.52 69.84 69.35 70.05 69.22 69.57 MIARN (this paper) 69.68 69.37 69.54 69.90 68.97 69.30 69.09 69.91 Table 3: Experimental results on Reddit datasets. Best result in is boldface and second best is underlined. Best performing baseline is in italics. Debates (IAC-V1) Debates (IAC-V2) Model P R F1 Acc P R F1 Acc NBOW 57.17 57.03 57.00 57.51 66.01 66.03 66.02 66.09 Vanilla CNN 58.21 58.00 57.95 58.55 68.45 68.18 68.21 68.56 Vanilla LSTM 54.87 54.89 54.84 54.92 68.30 63.96 60.78 62.66 Attention LSTM 58.98 57.93 57.23 59.07 70.04 69.62 69.63 69.96 GRNN (Zhang et al.) 56.21 56.21 55.96 55.96 62.26 61.87 61.21 61.37 CNN-LSTM-DNN (Ghosh and Veale) 55.50 54.60 53.31 55.96 64.31 64.33 64.31 64.38 SIARN (this paper) 63.94 63.45 62.52 62.69 72.17 71.81 71.85 72.10 MIARN (this paper) 63.88 63.71 63.18 63.21 72.92 72.93 72.75 72.75 Table 4: Experimental results on Debates datasets. Best result in is boldface and second best is underlined. Best performing baseline is in italics. on the development set is topped. The batch size is tuned amongst {128, 256, 512} for all datasets. The only exception is the Tweets dataset from (Riloff et al., 2013), in which a batch size of 16 is used in lieu of the much smaller dataset size. For fair comparison, all models have the same hidden representation size and are set to 100 for both recurrent and convolutional based models (i.e., number of filters). For MIARN, the size of intraattention hidden representation is tuned amongst {4, 8, 10, 20}. 4.4 Experimental Results Table 2, Table 3 and Table 4 reports a performance comparison of all benchmarked models on the Tweets, Reddit and Debates datasets respectively. We observe that our proposed SIARN and MIARN models achieve the best results across all six datasets. The relative improvement differs across domain and datasets. On the Tweets dataset from (Pt´aˇcek et al., 2014), MIARN achieves about ≈2% −2.2% improvement in terms of F1 and accuracy score when compared against the best baseline. On the other Tweets dataset from (Riloff et al., 2013), the performance gain of our proposed model is larger, i.e., 3% −5% improvement on average over most baselines. Our proposed SIARN and MIARN models achieve very competitive performance on the Reddit datasets, with an average of ≈2% margin improvement over the best baselines. Notably, the baselines we compare against are extremely competitive state-of-the-art neural network models. This further reinforces the effectiveness of our proposed approach. Additionally, the performance improvement on Debates (long text) is significantly larger than short 1017 text (i.e., Twitter and Reddit). 
For example, MIARN outperforms GRNN and CNN-LSTM-DNN by ≈8% −10% on both IAC-V1 and IAC-V2. At this note, we can safely put RQ1 to rest. Overall, the performance of MIARN is often marginally better than SIARN (with some exceptions, e.g., Tweets dataset from (Riloff et al., 2013)). We believe that this is attributed to the fact that more complex word-word relationships can be learned by using multi-dimensional values instead of single-dimensional scalars. The performance brought by our additional intra-attentive representations can be further observed by comparing against the vanilla LSTM model. Clearly, removing the intra-attention network reverts our model to the standard LSTM. The performance improvements are encouraging, leading to almost 10% improvement in terms of F1 and accuracy. On datasets with short text, the performance improvement is often a modest ≈2% −3% (RQ2). Notably, our proposed models also perform much better on long text, which can be attributed to the intra-attentive representations explicitly modeling long range dependencies. Intuitively, this is problematic for models that only capture sequential dependencies (e.g., word by word). Finally, the relative performance of competitor methods are as expected. NBOW performs the worse, since it is just a naive bag-of-words model without any compositional or sequential information. On short text, LSTMs are overall better than CNNs. However, this trend is reversed on long text (i.e., Debates) since the LSTM model may be overburdened by overly long sequences. On short text, we also found that attention (or the gated pooling mechanism from GRNN) did not really help make any significant improvements over the vanilla LSTM model and a qualitative explanation to why this is so is deferred to the next section. However, attention helps for long text (such as debates), resulting in Attention LSTMs becoming the strongest baseline on the Debates datasets. However, our proposed intra-attentive model is both effective on short text and long text, outperforming Attention LSTMs consistently on all datasets. 4.5 In-depth Model Analysis In this section, we present an in-depth analysis of our proposed model. More specifically, we not only aim to showcase the interpretability of our model but also explain how representations are formed. More specifically, we test our model (trained on Tweets dataset by (Pt´aˇcek et al., 2014)) on two examples. We extract the attention maps of three models, namely MIARN, Attention LSTM (ATT-LSTM) and applying Attention mechanism directly on the word embeddings without using a LSTM encoder (ATT-RAW). Table 5 shows the visualization of the attention maps. Label Model Sentence True MIARN I totally love being ignored !! ATT-LSTM I totally love being ignored !! ATT-RAW I totally love being ignored !! False MIARN Being ignored sucks big time ATT-LSTM Being ignored sucks big time ATT-RAW Being ignored sucks big time Table 5: Visualization of normalized attention weights on three different attention models (Best viewed in color). The intensity denotes the strength of the attention weight on the word. In the first example (true label), we notice that the attention maps of MIARN are focusing on the words ‘love’ and ‘ignored’. This is in concert with our intuition about modeling contrast and incongruity. On the other hand, both ATT-LSTM and ATT-RAW learn very different attention maps. As for ATT-LSTM, the attention weight is focused completely on the last representation - the token ‘!!’. 
Additionally, we also observed that this is true for many examples in the Tweets and Reddit dataset. We believe that this is the reason why standard neural attention does not help as what the attention mechanism is learning is to select the last representation (i.e., vanilla LSTM). Without the LSTM encoder, the attention weights focus on ‘love’ but not ‘ignored’. This fails to capture any concept of contrast or incongruity. Next, we consider the false labeled example. This time, the attention maps of MIARN are not as distinct as before. However, they focus on sentiment-bearing words, composing the words ‘ignored sucks’ to form the majority of the intraattentive representation. This time, passing the vector made up of ‘ignored sucks’ allows the subsequent layers to recognize that there is no contrasting situation or sentiment. Similarly, ATTLSTM focuses on the last word time which is totally non-interpretable. On the other hand, ATTRAW focuses on relatively non-meaningful words such as ‘big’. Overall, we analyzed two cases (positive and negative labels) and found that MIARN produces 1018 very explainable attention maps. In general, we found that MIARN is able to identify contrast and incongruity in sentences, allowing our model to better detect sarcasm. This is facilitated by modeling intra-sentence relationships. Notably, the standard vanilla attention is not explainable or interpretable. 5 Conclusion Based on the intuition of intra-sentence similarity (i.e., looking in-between), we proposed a new neural network architecture for sarcasm detection. Our network incorporates a multi-dimensional intra-attention component that learns an intraattentive representation of the sentence, enabling it to detect contrastive sentiment, situations and incongruity. Extensive experiments over six public benchmarks confirm the empirical effectiveness of our proposed model. Our proposed MIARN model outperforms strong state-of-the-art baselines such as GRNN and CNN-LSTM-DNN. Analysis of the intra-attention scores shows that our model learns highly interpretable attention weights, paving the way for more explainable neural sarcasm detection methods. References Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org. Silvio Amir, Byron C Wallace, Hao Lyu, and Paula Carvalho M´ario J Silva. 2016. Modelling context with user embeddings for sarcasm detection in social media. arXiv preprint arXiv:1607.00976 . Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Francesco Barbieri, Horacio Saggion, and Francesco Ronzano. 2014. Modelling sarcasm in twitter, a novel approach. In Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. pages 50–58. 
Huimin Chen, Maosong Sun, Cunchao Tu, Yankai Lin, and Zhiyuan Liu. 2016. Neural sentiment classification with user and product attention. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 1650–1659. Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on memory for aspect sentiment analysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 452–461. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 . Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. arXiv preprint arXiv:1708.00524 . Aniruddha Ghosh and Tony Veale. 2016. Fracking sarcasm using neural network. In Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, WASSA@NAACL-HLT 2016, June 16, 2016, San Diego, California, USA. pages 161– 169. http://aclweb.org/anthology/W/W16/W160425.pdf. Aniruddha Ghosh and Tony Veale. 2017. Magnets for sarcasm: Making sarcasm detection timely, contextual and very personal. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017. pages 482–491. Rachel Giora. 1995. On irony and negation. Discourse processes 19(2):239–264. Roberto Gonz´alez-Ib´anez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in twitter: a closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers-Volume 2. Association for Computational Linguistics, pages 581–586. Iraz´u Hern´andez-Far´ıas, Jos´e-Miguel Bened´ı, and Paolo Rosso. 2015. Applying basic features from sentiment analysis for automatic irony detection. In Iberian Conference on Pattern Recognition and Image Analysis. Springer, pages 337–344. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. 1019 Aditya Joshi, Pushpak Bhattacharyya, and Mark J Carman. 2017. Automatic sarcasm detection: A survey. ACM Computing Surveys (CSUR) 50(5):73. Aditya Joshi, Vinita Sharma, and Pushpak Bhattacharyya. 2015. Harnessing context incongruity for sarcasm detection. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). volume 2, pages 757–762. Aditya Joshi, Vaibhav Tripathi, Kevin Patel, Pushpak Bhattacharyya, and Mark Carman. 2016. Are word embedding-based features useful for sarcasm detection? arXiv preprint arXiv:1610.00883 . Mikhail Khodak, Nikunj Saunshi, and Kiran Vodrahalli. 2017. A large self-annotated corpus for sarcasm. arXiv preprint arXiv:1704.05579 . Stephanie Lukin and Marilyn Walker. 2017. Really? well. apparently bootstrapping improves the performance of sarcasm and nastiness classifiers for online dialogue. arXiv preprint arXiv:1708.08572 . Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025 . Abhijit Mishra, Kuntal Dey, and Pushpak Bhattacharyya. 2017. 
Learning cognitive features from gaze data for sentiment and sarcasm classification using convolutional neural network. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers. pages 377–387. https://doi.org/10.18653/v1/P171035. Abhijit Mishra, Diptesh Kanojia, Seema Nagar, Kuntal Dey, and Pushpak Bhattacharyya. 2016. Harnessing cognitive features for sarcasm detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. http://aclweb.org/anthology/P/P16/P161104.pdf. Bo Pang, Lillian Lee, et al. 2008. Opinion mining and sentiment analysis. Foundations and Trends R⃝in Information Retrieval 2(1–2):1–135. Ankur P. Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016. pages 2249–2255. Lotem Peled and Roi Reichart. 2017. Sarcasm SIGN: interpreting sarcasm with sentiment based monolingual machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers. pages 1690–1700. https://doi.org/10.18653/v1/P17-1155. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1532–1543. Tom´aˇs Pt´aˇcek, Ivan Habernal, and Jun Hong. 2014. Sarcasm detection on czech and english twitter. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. pages 213–223. Ashwin Rajadesingan, Reza Zafarani, and Huan Liu. 2015. Sarcasm detection on twitter: A behavioral modeling approach. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining. ACM, pages 97–106. Antonio Reyes, Paolo Rosso, and Tony Veale. 2013. A multidimensional approach for detecting irony in twitter. Language resources and evaluation 47(1):239–268. Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sentiment and negative situation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 1821 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 704–714. http://aclweb.org/anthology/D/D13/D13-1066.pdf. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk`y, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664 . Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603 . Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2017. Disan: Directional self-attention network for rnn/cnnfree language understanding. arXiv preprint arXiv:1709.04696 . Yi Tay, Anh Tuan Luu, and Siu Cheung Hui. 2018s. 
Learning to attend via word-aspect associative fusion for aspect-based sentiment analysis. In In Proceedings of the AAAI 2018, 5956-5963. Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2017a. A compare-propagate architecture with alignment factorization for natural language inference. arXiv preprint arXiv:1801.00102 . 1020 Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2017b. Dyadic memory networks for aspectbased sentiment analysis. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, November 06 - 10, 2017. pages 107–116. https://doi.org/10.1145/3132847.3132936. Oren Tsur, Dmitry Davidov, and Ari Rappoport. 2010. Icwsm-a great catchy name: Semi-supervised recognition of sarcastic sentences in online product reviews. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. pages 6000–6010. Deirdre Wilson. 2006. The pragmatics of verbal irony: Echo or pretence? Lingua 116(10):1722–1743. Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. CoRR abs/1611.01604. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J Smola, and Eduard H Hovy. 2016. Hierarchical attention networks for document classification. Meishan Zhang, Yue Zhang, and Guohong Fu. 2016. Tweet sarcasm detection using deep neural network. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan. pages 2449–2460. http://aclweb.org/anthology/C/C16/C16-1231.pdf.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1021–1032 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1021 Adversarial Contrastive Estimation Avishek Joey Bose1,2,∗† Huan Ling1,2,∗† Yanshuai Cao1,∗ 1Borealis AI 2University of Toronto {joey.bose,huan.ling}@mail.utoronto.ca {yanshuai.cao}@borealisai.com Abstract Learning by contrasting positive and negative samples is a general strategy adopted by many methods. Noise contrastive estimation (NCE) for word embeddings and translating embeddings for knowledge graphs are examples in NLP employing this approach. In this work, we view contrastive learning as an abstraction of all such methods and augment the negative sampler into a mixture distribution containing an adversarially learned sampler. The resulting adaptive sampler finds harder negative examples, which forces the main model to learn a better representation of the data. We evaluate our proposal on learning word embeddings, order embeddings and knowledge graph embeddings and observe both faster convergence and improved results on multiple metrics. 1 Introduction Many models learn by contrasting losses on observed positive examples with those on some fictitious negative examples, trying to decrease some score on positive ones while increasing it on negative ones. There are multiple reasons why such contrastive learning approach is needed. Computational tractability is one. For instance, instead of using softmax to predict a word for learning word embeddings, noise contrastive estimation (NCE) (Dyer, 2014; Mnih and Teh, 2012) can be used in skip-gram or CBOW word embedding models (Gutmann and Hyv¨arinen, 2012; Mikolov et al., 2013; Mnih and Kavukcuoglu, 2013; Vaswani et al., 2013). Another reason is ∗authors contributed equally †Work done while author was an intern at Borealis AI modeling need, as certain assumptions are best expressed as some score or energy in margin based or un-normalized probability models (Smith and Eisner, 2005). For example, modeling entity relations as translations or variants thereof in a vector space naturally leads to a distance-based score to be minimized for observed entity-relation-entity triplets (Bordes et al., 2013). Given a scoring function, the gradient of the model’s parameters on observed positive examples can be readily computed, but the negative phase requires a design decision on how to sample data. In noise contrastive estimation for word embeddings, a negative example is formed by replacing a component of a positive pair by randomly selecting a sampled word from the vocabulary, resulting in a fictitious word-context pair which would be unlikely to actually exist in the dataset. This negative sampling by corruption approach is also used in learning knowledge graph embeddings (Bordes et al., 2013; Lin et al., 2015; Ji et al., 2015; Wang et al., 2014; Trouillon et al., 2016; Yang et al., 2014; Dettmers et al., 2017), order embeddings (Vendrov et al., 2016), caption generation (Dai and Lin, 2017), etc. Typically the corruption distribution is the same for all inputs like in skip-gram or CBOW NCE, rather than being a conditional distribution that takes into account information about the input sample under consideration. Furthermore, the corruption process usually only encodes a human prior as to what constitutes a hard negative sample, rather than being learned from data. 
For these two reasons, the simple fixed corruption process often yields only easy negative examples. Easy negatives are sub-optimal for learning discriminative representation as they do not force the model to find critical characteristics of observed positive data, which has been independently discovered in applications outside NLP previously (Shrivastava 1022 et al., 2016). Even if hard negatives are occasionally reached, the infrequency means slow convergence. Designing a more sophisticated corruption process could be fruitful, but requires costly trialand-error by a human expert. In this work, we propose to augment the simple corruption noise process in various embedding models with an adversarially learned conditional distribution, forming a mixture negative sampler that adapts to the underlying data and the embedding model training progress. The resulting method is referred to as adversarial contrastive estimation (ACE). The adaptive conditional model engages in a minimax game with the primary embedding model, much like in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a), where a discriminator net (D), tries to distinguish samples produced by a generator (G) from real data (Goodfellow et al., 2014b). In ACE, the main model learns to distinguish between a real positive example and a negative sample selected by the mixture of a fixed NCE sampler and an adversarial generator. The main model and the generator takes alternating turns to update their parameters. In fact, our method can be viewed as a conditional GAN (Mirza and Osindero, 2014) on discrete inputs, with a mixture generator consisting of a learned and a fixed distribution, with additional techniques introduced to achieve stable and convergent training of embedding models. In our proposed ACE approach, the conditional sampler finds harder negatives than NCE, while being able to gracefully fall back to NCE whenever the generator cannot find hard negatives. We demonstrate the efficacy and generality of the proposed method on three different learning tasks, word embeddings (Mikolov et al., 2013), order embeddings (Vendrov et al., 2016) and knowledge graph embeddings (Ji et al., 2015). 2 Method 2.1 Background: contrastive learning In the most general form, our method applies to supervised learning problems with a contrastive objective of the following form: L(ω) = Ep(x+,y+,y−) lω(x+, y+, y−) (1) where lω(x+, y+, y−) captures both the model with parameters ω and the loss that scores a positive tuple (x+, y+) against a negative one (x+, y−). Ep(x+,y+,y−)(.) denotes expectation with respect to some joint distribution over positive and negative samples. Furthermore, by the law of total expectation, and the fact that given x+, the negative sampling is not dependent on the positive label, i.e. p(y+, y−|x+) = p(y+|x+)p(y−|x+), Eq. 1 can be re-written as Ep(x+)[Ep(y+|x+)p(y−|x+) lω(x+, y+, y−)] (2) Separable loss In the case where the loss decomposes into a sum of scores on positive and negative tuples such as lω(x+, y+, y−) = sω (x+, y+)−˜sω (x+, y−), then Expression. 2 becomes Ep+(x)[Ep+(y|x) sω (x, y) −Ep−(y|x) ˜sω (x, y)] (3) where we moved the + and −to p for notational brevity. Learning by stochastic gradient descent aims to adjust ω to pushing down sω (x, y) on samples from p+ while pushing up ˜sω (x, y) on samples from p−. Note that for generality, the scoring function for negative samples, denoted by ˜sω, could be slightly different from sω. 
For instance, ˜s could contain a margin as in the case of Order Embeddings in Sec. 4.2. Non separable loss Eq. 1 is the general form that we would like to consider because for certain problems, the loss function cannot be separated into sums of terms containing only positive (x+, y+) and terms with negatives (x+, y−). An example of such a nonseparable loss is the triplet ranking loss (Schroff et al., 2015): lω = max(0, η + sω (x+, y+) − sω (x+, y−)), which does not decompose due to the rectification. Noise contrastive estimation The typical NCE approach in tasks such as word embeddings (Mikolov et al., 2013), order embeddings (Vendrov et al., 2016), and knowledge graph embeddings can be viewed as a special case of Eq. 2 by taking p(y−|x+) to be some unconditional pnce(y). This leads to efficient computation during training, however, pnce(y) sacrifices the sampling efficiency of learning as the negatives produced using a fixed distribution are not tailored toward x+, and as a result are not necessarily hard negative examples. Thus, the model is not forced to discover discriminative representation of observed positive 1023 data. As training progresses, more and more negative examples are correctly learned, the probability of drawing a hard negative example diminishes further, causing slow convergence. 2.2 Adversarial mixture noise To remedy the above mentioned problem of a fixed unconditional negative sampler, we propose to augment it into a mixture one, λpnce(y) + (1 − λ)gθ(y|x), where gθ is a conditional distribution with a learnable parameter θ and λ is a hyperparameter. The objective in Expression. 2 can then be written as (conditioned on x for notational brevity): L(ω, θ; x) = λ Ep(y+|x)pnce(y−) lω(x, y+, y−) + (1 −λ) Ep(y+|x)gθ(y−|x) lω(x, y+, y−) (4) We learn (ω, θ) in a GAN-style minimax game: min ω max θ V (ω, θ) = min ω max θ Ep+(x) L(ω, θ; x) (5) The embedding model behind lω(x, y+, y−) is similar to the discriminator in (conditional) GAN (or critic in Wasserstein (Arjovsky et al., 2017) or Energy-based GAN (Zhao et al., 2016), while gθ(y|x) acts as the generator. Henceforth, we will use the term discriminator (D) and embedding model interchangeably, and refer to gθ as the generator. 2.3 Learning the generator There is one important distinction to typical GAN: gθ(y|x) defines a categorical distribution over possible y values, and samples are drawn accordingly; in contrast to typical GAN over continuous data space such as images, where samples are generated by an implicit generative model that warps noise vectors into data points. Due to the discrete sampling step, gθ cannot learn by receiving gradient through the discriminator. One possible solution is to use the Gumbel-softmax reparametrization trick (Jang et al., 2016; Maddison et al., 2016), which gives a differentiable approximation. However, this differentiability comes at the cost of drawing N Gumbel samples per each categorical sample, where N is the number of categories. For word embeddings, N is the vocabulary size, and for knowledge graph embeddings, N is the number of entities, both leading to infeasible computational requirements. Instead, we use the REINFORCE (Williams, 1992) gradient estimator for ∇θL(θ, x): (1−λ) E  −lω(x, y+, y−)∇θ log(gθ(y−|x))  (6) where the expectation E is with respect to p(y+, y−|x) = p(y+|x)gθ(y−|x), and the discriminator loss lω(x, y+, y−) acts as the reward. 
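For concreteness, a minimal sketch of this estimator is given below. It is our reconstruction rather than the released code: a negative candidate is sampled from the categorical generator, the detached discriminator loss is used as the reward, and calling `.backward()` on the returned scalar yields a Monte-Carlo estimate of the gradient in Equation (6). The names `gen_logits`, `reward_fn`, and `lam` (the mixture weight λ) are assumptions made only for this example.

```python
import torch

def ace_generator_surrogate(gen_logits, reward_fn, lam=0.5):
    """REINFORCE surrogate for the ACE generator, Eq. (6), as a sketch.

    gen_logits : (batch, N) scores defining the categorical g_theta(y | x) over N candidates.
    reward_fn  : callable mapping sampled negative indices (batch,) to the per-example
                 discriminator loss l_omega(x, y+, y-), which serves as the reward.
    """
    dist = torch.distributions.Categorical(logits=gen_logits)
    y_neg = dist.sample()                      # discrete sample: no gradient flows through it
    with torch.no_grad():
        reward = reward_fn(y_neg)              # treat the discriminator loss as a fixed reward
    log_prob = dist.log_prob(y_neg)            # log g_theta(y- | x)
    # Differentiating this scalar w.r.t. theta gives the sampled form of Eq. (6).
    return (1.0 - lam) * (-reward * log_prob).mean(), y_neg
```

The entropy regularizer, false-negative handling, and variance-reduction baseline introduced in the following subsections would be layered on top of this basic estimator.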
With a separable loss, the (conditional) value function of the minimax game is: L(ω, θ; x) = Ep+(y|x) sω (x, y) −Epnce(y) ˜sω (x, y) −Egθ(y|x) ˜sω (x, y) (7) and only the last term depends on the generator parameter ω. Hence, with a separable loss, the reward is −˜s(x+, y−). This reduction does not happen with a non-separable loss, and we have to use lω(x, y+, y−). 2.4 Entropy and training stability GAN training can suffer from instability and degeneracy where the generator probability mass collapses to a few modes or points. Much work has been done to stabilize GAN training in the continuous case (Arjovsky et al., 2017; Gulrajani et al., 2017; Cao et al., 2018). In ACE, if the generator gθ probability mass collapses to a few candidates, then after the discriminator successfully learns about these negatives, gθ cannot adapt to select new hard negatives, because the REINFORCE gradient estimator Eq. 6 relies on gθ being able to explore other candidates during sampling. Therefore, if the gθ probability mass collapses, instead of leading to oscillation as in typical GAN, the min-max game in ACE reaches an equilibrium where the discriminator wins and gθ can no longer adapt, then ACE falls back to NCE since the negative sampler has another mixture component from NCE. This behavior of gracefully falling back to NCE is more desirable than the alternative of stalled training if p−(y|x) does not have a simple pnce mixture component. However, we would still like to avoid such collapse, as the adversarial samples provide greater learning signals than NCE samples. To this end, we propose to use a regularizer to encourage the categorical distribution gθ(y|x) to have high entropy. In order to make the the regularizer interpretable and its hyperparameters easy to tune, we design the following form: Rent(x) = min(0, c −H(gθ(y|x))) (8) 1024 where H(gθ(y|x)) is the entropy of the categorical distribution gθ(y|x), and c = log(k) is the entropy of a uniform distribution over k choices, and k is a hyper-parameter. Intuitively, Rent expresses the prior that the generator should spread its mass over more than k choices for each x. 2.5 Handling false negatives During negative sampling, p−(y|x) could actually produce y that forms a positive pair that exists in the training set, i.e., a false negative. This possibility exists in NCE already, but since pnce is not adaptive, the probability of sampling a false negative is low. Hence in NCE, the score on this false negative (true observation) pair is pushed up less in the negative term than in the positive term. However, with the adaptive sampler, gω(y|x), false negatives become a much more severe issue. gω(y|x) can learn to concentrate its mass on a few false negatives, significantly canceling the learning of those observations in the positive phase. The entropy regularization reduces this problem as it forces the generator to spread its mass, hence reducing the chance of a false negative. To further alleviate this problem, whenever computationally feasible, we apply an additional two-step technique. First, we maintain a hash map of the training data in memory, and use it to efficiently detect if a negative sample (x+, y−) is an actual observation. If so, its contribution to the loss is given a zero weight in ω learning step. Second, to upate θ in the generator learning step, the reward for false negative samples are replaced by a large penalty, so that the REINFORCE gradient update would steer gθ away from those samples. 
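The two devices just described, the entropy regularizer of Eq. 8 and the two-step false-negative handling, can be sketched as follows; the in-memory lookup structure, the penalty value, and k are illustrative choices rather than values fixed by the method:

```python
import math
import torch
import torch.nn.functional as F

def entropy_regularizer(gen_logits, k=100):
    # R_ent(x) = min(0, c - H(g_theta(y|x))), c = log(k)   (Eq. 8),
    # used to encourage g_theta to spread its mass over more than k choices.
    log_g = F.log_softmax(gen_logits, dim=-1)
    entropy = -(log_g.exp() * log_g).sum(dim=-1)           # H(g_theta(y|x)) per example
    return torch.clamp(math.log(k) - entropy, max=0.0)

def handle_false_negatives(x_ids, y_neg_ids, reward, observed_pairs, penalty=-10.0):
    # observed_pairs: hash set of training pairs kept in memory; the set
    # representation and the penalty value are illustrative.
    is_false_neg = torch.tensor([(x, y) in observed_pairs
                                 for x, y in zip(x_ids.tolist(), y_neg_ids.tolist())])
    disc_weight = (~is_false_neg).float()                  # step 1: zero weight in the w update
    gen_reward = torch.where(is_false_neg,
                             torch.full_like(reward, penalty),
                             reward)                       # step 2: large penalty as reward
    return disc_weight, gen_reward
```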
The second step is needed to prevent null computation where gθ learns to sample false negatives which are subsequently ignored by the discriminator update for ω. 2.6 Variance Reduction The basic REINFORCE gradient estimator is poised with high variance, so in practice one often needs to apply variance reduction techniques. The most basic form of variance reduction is to subtract a baseline from the reward. As long as the baseline is not a function of actions (i.e., samples y−being drawn), the REINFORCE gradient estimator remains unbiased. More advanced gradient estimators exist that also reduce variance (Grathwohl et al., 2017; Tucker et al., 2017; Liu et al., 2018), but for simplicity we use the self-critical baseline method (Rennie et al., 2016), where the baseline is b(x) = lω(y+, y⋆, x), or b(x) = −˜sω(y⋆, x) in the separable loss case, and y⋆= argmaxigθ(yi|x). In other words, the baseline is the reward of the most likely sample according to the generator. 2.7 Improving exploration in gθ by leveraging NCE samples In Sec. 2.4 we touched on the need for sufficient exploration in gθ. It is possible to also leverage negative samples from NCE to help the generator learn. This is essentially off-policy exploration in reinforcement learning since NCE samples are not drawn according to gθ(y|x). The generator learning can use importance re-weighting to leverage those samples. The resulting REINFORCE gradient estimator is basically the same as Eq. 6 except that the rewards are reweighted by gθ(y−|x)/pnce(y−), and the expectation is with respect to p(y+|x)pnce(y−). This additional offpolicy learning term provides gradient information for generator learning if gθ(y−|x) is not zero, meaning that for it to be effective in helping exploration, the generator cannot be collapsed at the first place. Hence, in practice, this term is only used to further help on top of the entropy regularization, but it does not replace it. 3 Related Work Smith and Eisner (2005) proposed contrastive estimation as a way for unsupervised learning of log-linear models by taking implicit evidence from user-defined neighborhoods around observed datapoints. Gutmann and Hyv¨arinen (2010) introduced NCE as an alternative to the hierarchical softmax. In the works of Mnih and Teh (2012) and Mnih and Kavukcuoglu (2013), NCE is applied to log-bilinear models and Vaswani et al. (2013) applied NCE to neural probabilistic language models (Yoshua et al., 2003). Compared to these previous NCE methods that rely on simple fixed sampling heuristics, ACE uses an adaptive sampler that produces harder negatives. In the domain of max-margin estimation for structured prediction (Taskar et al., 2005), loss augmented MAP inference plays the role of finding hard negatives (the hardest). However, this inference is only tractable in a limited class of models such structured SVM (Tsochantaridis et al., 2005). Compared to those models that use exact 1025 maximization to find the hardest negative configuration each time, the generator in ACE can be viewed as learning an approximate amortized inference network. Concurrently to this work, Tu and Gimpel (2018) proposes a very similar framework, using a learned inference network for Structured prediction energy networks (SPEN) (Belanger and McCallum, 2016). Concurrent with our work, there have been other interests in applying the GAN to NLP problems (Fedus et al., 2018; Wang et al., 2018; Cai and Wang, 2017). 
Knowledge graph models naturally lend to a GAN setup, and has been the subject of study in Wang et al. (2018) and Cai and Wang (2017). These two concurrent works are most closely related to one of the three tasks on which we study ACE in this work. Besides a more general formulation that applies to problems beyond those considered in Wang et al. (2018) and Cai and Wang (2017), the techniques introduced in our work on handling false negatives and entropy regularization lead to improved experimental results as shown in Sec. 5.4. 4 Application of ACE on three tasks 4.1 Word Embeddings Word embeddings learn a vector representation of words from co-occurrences in a text corpus. NCE casts this learning problem as a binary classification where the model tries to distinguish positive word and context pairs, from negative noise samples composed of word and false context pairs. The NCE objective in Skip-gram (Mikolov et al., 2013) for word embeddings is a separable loss of the form: L = − X wt∈V [log p(y = 1|wt, w+ c ) + K X c=1 log p(y = 0|wt, w− c )] (9) Here, w+ c is sampled from the set of true contexts and w− c ∼Q is sampled k times from a fixed noise distribution. Mikolov et al. (2013) introduced a further simplification of NCE, called “Negative Sampling” (Dyer, 2014). With respect to our ACE framework, the difference between NCE and Negative Sampling is inconsequential, so we continue the discussion using NCE. A drawback of this sampling scheme is that it favors more common words as context. Another issue is that the negative context words are sampled in the same way, rather than tailored toward the actual target word. To apply ACE to this problem we first define the value function for the minimax game, V (D, G), as follows: V (D, G) = Ep+(wc)[log D(wc, wt)] −Epnce(wc)[−log(1 −D(wc, wt))] −Egθ(wc|wt)[−log(1 −D(wc, wt))] (10) with D = p(y = 1|wt, wc) and G = gθ(wc|wt). Implementation details For our experiments, we train all our models on a single pass of the May 2017 dump of the English Wikipedia with lowercased unigrams. The vocabulary size is restricted to the top 150k most frequent words when training from scratch while for finetuning we use the same vocabulary as Pennington et al. (2014), which is 400k of the most frequent words. We use 5 NCE samples for each positive sample and 1 adversarial sample in a window size of 10 and the same positive subsampling scheme proposed by Mikolov et al. (2013). Learning for both G and D uses Adam (Kingma and Ba, 2014) optimizer with its default parameters. Our conditional discriminator is modeled using the Skip-Gram architecture, which is a two layer neural network with a linear mapping between the layers. The generator network consists of an embedding layer followed by two small hidden layers, followed by an output softmax layer. The first layer of the generator shares its weights with the second embedding layer in the discriminator network, which we find really speeds up convergence as the generator does not have to relearn its own set of embeddings. The difference between the discriminator and generator is that a sigmoid nonlinearity is used after the second layer in the discriminator, while in the generator, a softmax layer is used to define a categorical distribution over negative word candidates. We find that controlling the generator entropy is critical for finetuning experiments as otherwise the generator collapses to its favorite negative sample. The word embeddings are taken to be the first dense matrix in the discriminator. 
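To make the word-embedding instantiation of Sec. 4.1 concrete, the sketch below scores one batch of (target, context) pairs with a skip-gram style discriminator and combines NCE negatives with one generator-sampled negative, following the sign conventions of Eq. 9; the λ-weighting of the mixture is omitted for brevity, and dimensions, the noise sampler, and all names are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipGramDiscriminator(nn.Module):
    # D(w_c, w_t) = sigma(context_emb(w_c) . target_emb(w_t)), skip-gram style.
    def __init__(self, vocab_size, dim=300):
        super().__init__()
        self.target_emb = nn.Embedding(vocab_size, dim)
        self.context_emb = nn.Embedding(vocab_size, dim)

    def logit(self, w_t, w_c):
        return (self.target_emb(w_t) * self.context_emb(w_c)).sum(-1)

def ace_word_loss(disc, w_t, w_c_pos, w_c_nce, gen_logits):
    # Discriminator-side loss for one batch.
    # w_c_nce:    [batch, K] negatives from the fixed NCE sampler.
    # gen_logits: [batch, V] scores defining g_theta(w_c | w_t); one adversarial
    #             negative per positive is an illustrative choice.
    w_c_adv = torch.multinomial(F.softmax(gen_logits, dim=-1), 1).squeeze(-1)
    pos = F.logsigmoid(disc.logit(w_t, w_c_pos))
    nce = F.logsigmoid(-disc.logit(w_t.unsqueeze(-1).expand_as(w_c_nce), w_c_nce)).sum(-1)
    adv = F.logsigmoid(-disc.logit(w_t, w_c_adv))
    return -(pos + nce + adv).mean()
```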
4.2 Order Embeddings Hypernym Prediction As introduced in Vendrov et al. (2016), ordered representations over hierarchy can be learned by 1026 order embeddings. An example task for such ordered representation is hypernym prediction. A hypernym pair is a pair of concepts where the first concept is a specialization or an instance of the second. For completeness, we briefly describe order embeddings, then analyze ACE on the hypernym prediction task. In order embeddings, each entity is represented by a vector in RN, the score for a positive ordered pair of entities (x, y) is defined by sω(x, y) = ||max(0, y −x)||2 and, score for a negative ordered pair (x+, y−) is defined by ˜sω(x+, y−) = max{0, η −s(x+, y−)}, where is η is the margin. Let f(u) be the embedding function which takes an entity as input and outputs en embedding vector. We define P as a set of positive pairs and N as negative pairs, the separable loss function for order embedding task is defined by: L= X (u,v)∈P sω(f(u), f(v)))+ X (u,v)∈N ˜s(f(u), f(v)) (11) Implementation details Our generator for this task is just a linear fully connected softmax layer, taking an embedding vector from discriminator as input and outputting a categorical distribution over the entity set. For the discriminator, we inherit all model setting from Vendrov et al. (2016): we use 50 dimensions hidden state and bash size 1000, a learning rate of 0.01 and the Adam optimizer. For the generator, we use a batch size of 1000, a learning rate 0.01 and the Adam optimizer. We apply weight decay with rate 0.1 and entropy loss regularization as described in Sec. 2.4. We handle false negative as described in Sec. 2.5. After cross validation, variance reduction and leveraging NCE samples does not greatly affect the order embedding task. 4.3 Knowledge Graph Embeddings Knowledge graphs contain entity and relation data of the form (head entity, relation, tail entity), and the goal is to learn from observed positive entity relations and predict missing links (a.k.a. link prediction). There have been many works on knowledge graph embeddings, e.g. TransE (Bordes et al., 2013), TransR (Lin et al., 2015), TransH (Wang et al., 2014), TransD (Ji et al., 2015), Complex (Trouillon et al., 2016), DistMult (Yang et al., 2014) and ConvE (Dettmers et al., 2017). Many of them use a contrastive learning objective. Here we take TransD as an example, and modify its noise contrastive learning to ACE, and demonstrate significant improvement in sample efficiency and link prediction results. Implementation details Let a positive entity-relation-entity triplet be denoted by ξ+ = (h+, r+, t+), and a negative triplet could either have its head or tail be a negative sample, i.e. ξ−= (h−, r+, t+) or ξ−= (h+, r+, t−). In either case, the general formulation in Sec. 2.1 still applies. The non-separable loss function takes on the form: l = max(0, η + sω(ξ+) −sω(ξ−)) (12) The scoring rule is: s = ∥h⊥+ r −t⊥∥ (13) where r is the embedding vector for r, and h⊥is projection of the embedding of h onto the space of r by h⊥= h + rph⊤ p h, where rp and hp are projection parameters of the model. t⊥is defined in a similar way through parameters t, tp and rp. The form of the generator gθ(t−|r+, h+) is chosen to be fθ(h⊥, h⊥+ r), where fθ is a feedforward neural net that concatenates its two input arguments, then propagates through two hidden layers, followed by a final softmax output layer. As a function of (r+, h+), gθ shares parameter with the discriminator, as the inputs to fθ are the embedding vectors. 
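A sketch of the TransD scoring rule in Eq. 13 and of the inputs handed to the generator fθ is shown below; only the simplified vector form of the projection is used, and all tensor shapes and names are illustrative:

```python
import torch

def transd_project(e, e_p, r_p):
    # e_perp = e + r_p (e_p^T e): project entity e into the relation space.
    return e + (e_p * e).sum(-1, keepdim=True) * r_p

def transd_score(h, h_p, r, r_p, t, t_p):
    # s = || h_perp + r - t_perp ||   (Eq. 13); a lower score is a better triplet.
    h_perp = transd_project(h, h_p, r_p)
    t_perp = transd_project(t, t_p, r_p)
    return torch.norm(h_perp + r - t_perp, dim=-1)

def generator_inputs(h, h_p, r, r_p):
    # g_theta(t- | r+, h+) = f_theta(h_perp, h_perp + r): the two arguments
    # that are concatenated before f_theta's hidden layers.
    h_perp = transd_project(h, h_p, r_p)
    return torch.cat([h_perp, h_perp + r], dim=-1)
```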
During generator learning, only θ is updated and the TransD model embedding parameters are frozen. 5 Experiments We evaluate ACE with experiments on word embeddings, order embeddings, and knowledge graph embeddings tasks. In short, whenever the original learning objective is contrastive (all tasks except Glove fine-tuning) our results consistently show that ACE improves over NCE. In some cases, we include additional comparisons to the state-of-art results on the task to put the significance of such improvements in context: the generic ACE can often make a reasonable baseline competitive with SOTA methods that are optimized for the task. For word embeddings, we evaluate models trained from scratch as well as fine-tuned Glove models (Pennington et al., 2014) on word similarity tasks that consist of computing the similarity 1027 Figure 1: Left: Order embedding Accuracy plot. Right: Order embedding discriminator Loss plot on NCE sampled negative pairs and positive pairs. Figure 2: loss curve on NCE negative pairs and ACE negative pairs. Left: without entropy and weight decay. Right: with entropy and weight decay Figure 3: Left: Rare Word, Right: WS353 similarity scores during the first epoch of training. Figure 4: Training from scratch losses on the Discriminator between word pairs where the ground truth is an average of human scores. We choose the Rare word dataset (Luong et al., 2013) and WordSim353 (Finkelstein et al., 2001) by virtue of our hypothesis that ACE learns better representations for both rare and frequent words. We also qualitatively evaluate ACE word embeddings by inspecting the nearest neighbors of selected words. For the hypernym prediction task, following Vendrov et al. (2016), hypernym pairs are created from the WordNet hierarchy’s transitive closure. We use the released random development split and test split from Vendrov et al. (2016), which both contain 4000 edges. For knowledge graph embeddings, we use TransD (Ji et al., 2015) as our base model, and perform ablation study to analyze the behavior of ACE with various add-on features, and confirm that entropy regularization is crucial for good performance in ACE. We also obtain link prediction results that are competitive or superior to the stateof-arts on the WN18 dataset (Bordes et al., 2014). 5.1 Training Word Embeddings from scratch In this experiment, we empirically observe that training word embeddings using ACE converges significantly faster than NCE after one epoch. As shown in Fig. 3 both ACE (a mixture of pnce and gθ) and just gθ (denoted by ADV) significantly outperforms the NCE baseline, with an absolute improvement of 73.1% and 58.5% respectively on RW score. We note similar results on WordSim353 dataset where ACE and ADV outperforms NCE by 40.4% and 45.7%. We also evaluate our model qualitatively by inspecting the nearest neighbors of selected words in Table. 1. We first present the five nearest neighbors to each word to show that both NCE and ACE models learn sensible embeddings. We then show that ACE embeddings have much better semantic relevance in a larger neighborhood (nearest neighbor 45-50). 5.2 Finetuning Word Embeddings We take off-the-shelf pre-trained Glove embeddings which were trained using 6 billion tokens (Pennington et al., 2014) and fine-tune them using our algorithm. It is interesting to note that the original Glove objective does not fit into the contrastive learning framework, but nonetheless we find that they benefit from ACE. 
In fact, we observe that training such that 75% of the words appear as positive contexts is sufficient to beat the largest dimensionality pre-trained Glove model on word similarity tasks. We evaluate our performance on the Rare Word and WordSim353 data. As can be seen from our results in Table 2, ACE on RW is not always better and for the 100d and 300d Glove embeddings is marginally worse. However, on WordSim353 ACE does considerably better across the board to the point where 50d Glove embeddings outperform the 300d baseline Glove model. 5.3 Hypernym Prediction As shown in Table 3, with ACE training, our method achieves a 1.5% improvement on accu1028 Queen King Computer Man Woman Skip-Gram NCE Top 5 princess prince computers woman girl king queen computing boy man empress kings software girl prostitute pxqueen emperor microcomputer stranger person monarch monarch mainframe person divorcee Skip-Gram NCE Top 45-50 sambiria eraric hypercard angiomata suitor phongsri mumbere neurotechnology someone nymphomaniac safrit empress lgp bespectacled barmaid mcelvoy saxonvm pcs hero redheaded tsarina pretender keystroke clown jew Skip-Gram ACE Top 5 princess prince software woman girl prince vi computers girl herself elizabeth kings applications tells man duke duke computing dead lover consort iii hardware boy tells Skip-Gram ACE Top 45-50 baron earl files kid aunt abbey holy information told maid throne cardinal device revenge wife marie aragon design magic lady victoria princes compatible angry bride Table 1: Top 5 Nearest Neighbors of Words followed by Neighbors 45-50 for different Models. RW WS353 Skipgram Only NCE baseline 18.90 31.35 Skipgram + Only ADV 29.96 58.05 Skipgram + ACE 32.71 55.00 Glove-50 (Recomputed based on(Pennington et al., 2014)) 34.02 49.51 Glove-100 (Recomputed based on(Pennington et al., 2014)) 36.64 52.76 Glove-300 (Recomputed based on(Pennington et al., 2014)) 41.18 60.12 Glove-50 + ACE 35.60 60.46 Glove-100 + ACE 36.51 63.29 Glove-300 + ACE 40.57 66.50 Table 2: Spearman score (ρ ∗100) on RW and WS353 Datasets. We trained a skipgram model from scratch under various settings for only 1 epoch on wikipedia. For finetuned models we recomputed the scores based on the publicly available 6B tokens Glove models and we finetuned until roughly 75% of the vocabulary was seen. racy over Vendrov et al. (2016) without tunning any of the discriminator’s hyperparameters. We further report training curve in Fig. 1, we report loss curve on randomly sampled pairs. We stress that in the ACE model, we train random pairs and generator generated pairs jointly, as shown in Fig. 2, hard negatives help the order embedding model converges faster. 5.4 Ablation Study and Improving TransD To analyze different aspects of ACE, we perform an ablation study on the knowledge graph embedding task. As described in Sec. 4.3, the base Method Accuracy (%) order-embeddings 90.6 order-embeddings + Our ACE 92.0 Table 3: Order Embedding Performance model (discriminator) we apply ACE to is TransD (Ji et al., 2015). Fig. 5 shows validation performance as training progresses. All variants of ACE converges to better results than base NCE. Among ACE variants, all methods that include entropy regularization significantly outperform without entropy regularization. Without the self critical baseline variance reduction, learning could progress faster at the beginning but the final performance suffers slightly. The best performance is obtained without the additional off-policy learning of the generator. Table. 
4 shows the final test results on WN18 link prediction task. It is interesting to note that ACE improves MRR score more significantly than hit@10. As MRR is a lot more sensitive to the top rankings, i.e., how the correct configuration ranks among the competitive alternatives, this is consistent with the fact that ACE samples hard negatives and forces the base model to learn a more discriminative representation of the positive examples. 1029 Figure 5: Ablation study: measuring validation Mean Reciprocal Rank (MRR) on WN18 dataset as training progresses. MRR hit@10 ACE(Ent+SC) 0.792 0.945 ACE(Ent+SC+IW) 0.768 0.949 NCE TransD (ours) 0.527 0.947 NCE TransD ((Ji et al., 2015)) 0.925 KBGAN(DISTMULT) ((Cai and Wang, 2017)) 0.772 0.948 KBGAN(COMPLEX) ((Cai and Wang, 2017)) 0.779 0.948 Wang et al. ((Wang et al., 2018)) 0.93 COMPLEX ((Trouillon et al., 2016)) 0.941 0.947 Table 4: WN18 experiments: the first portion of the table contains results where the base model is TransD, the last separated line is the COMPLEX embedding model (Trouillon et al., 2016), which achieves the SOTA on this dataset. Among all TransD based models (the best results in this group is underlined), ACE improves over basic NCE and another GAN based approach KBGAN. The gap on MRR is likely due to the difference between TransD and COMPLEX models. 5.5 Hard Negative Analysis To better understand the effect of the adversarial samples proposed by the generator we plot the discriminator loss on both pnce and gθ samples. In this context, a harder sample means a higher loss assigned by the discriminator. Fig. 4 shows that discriminator loss for the word embedding task on gθ samples are always higher than on pnce samples, confirming that the generator is indeed sampling harder negatives. For Hypernym Prediction task, Fig.2 shows discriminator loss on negative pairs sampled from NCE and ACE respectively. The higher the loss the harder the negative pair is. As indicated in the left plot, loss on the ACE negative terms collapses faster than on the NCE negatives. After adding entropy regularization and weight decay, the generator works as expected. 6 Limitations When the generator softmax is large, the current implementation of ACE training is computationally expensive. Although ACE converges faster per iteration, it may converge more slowly on wall-clock time depending on the cost of the softmax. However, embeddings are typically used as pre-trained building blocks for subsequent tasks. Thus, their learning is usually the pre-computation step for the more complex downstream models and spending more time is justified, especially with GPU acceleration. We believe that the computational cost could potentially be reduced via some existing techniques such as the “augment and reduce” variational inference of (Ruiz et al., 2018), adaptive softmax (Grave et al., 2016), or the “sparsely-gated” softmax of Shazeer et al. (2017), but leave that to future work. Another limitation is on the theoretical front. As noted in Goodfellow (2014), GAN learning does not implement maximum likelihood estimation (MLE), while NCE has MLE as an asymptotic limit. To the best of our knowledge, more distant connections between GAN and MLE training are not known, and tools for analyzing the equilibrium of a min-max game where players are parametrized by deep neural nets are currently not available to the best of our knowledge. 
7 Conclusion In this paper, we propose Adversarial Contrastive Estimation as a general technique for improving supervised learning problems that learn by contrasting observed and fictitious samples. Specifically, we use a generator network in a conditional GAN like setting to propose hard negative examples for our discriminator model. We find that a mixture distribution of randomly sampling negative examples along with an adaptive negative sampler leads to improved performances on a variety of embedding tasks. We validate our hypothesis that hard negative examples are critical to optimal learning and can be proposed via our ACE framework. Finally, we find that controlling the entropy of the generator through a regularization term and properly handling false negatives is crucial for successful training. 1030 References Martin Arjovsky, Soumith Chintala, and L´eon Bottou. 2017. Wasserstein GAN. arXiv preprint arXiv:1701.07875. David Belanger and Andrew McCallum. 2016. Structured prediction energy networks. In International Conference on Machine Learning, pages 983–992. Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014. A semantic matching energy function for learning with multi-relational data. Machine Learning, 94(2):233–259. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in neural information processing systems, pages 2787–2795. Liwei Cai and William Yang Wang. 2017. Kbgan: Adversarial learning for knowledge graph embeddings. arXiv preprint arXiv:1711.04071. Yanshuai Cao, Gavin Weiguang Ding, Kry Yik-Chau Lui, and Ruitong Huang. 2018. Improving GAN training via binarized representation entropy (BRE) regularization. In International Conference on Learning Representations. Bo Dai and Dahua Lin. 2017. Contrastive learning for image captioning. In Advances in Neural Information Processing Systems, pages 898–907. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2017. Convolutional 2d knowledge graph embeddings. arXiv preprint arXiv:1707.01476. Chris Dyer. 2014. Notes on noise contrastive estimation and negative sampling. arXiv preprint arXiv:1410.8251. William Fedus, Ian Goodfellow, and Andrew M Dai. 2018. MaskGAN: Better text generation via filling in the . arXiv preprint arXiv:1801.07736. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th international conference on World Wide Web, pages 406– 414. ACM. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014a. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014b. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680. Ian J Goodfellow. 2014. On distinguishability criteria for estimating generative models. arXiv preprint arXiv:1412.6515. Will Grathwohl, Dami Choi, Yuhuai Wu, Geoff Roeder, and David Duvenaud. 2017. Backpropagation through the void: Optimizing control variates for black-box gradient estimation. arXiv preprint arXiv:1711.00123. 
Edouard Grave, Armand Joulin, Moustapha Ciss´e, David Grangier, and Herv´e J´egou. 2016. Efficient softmax approximation for GPUs. arXiv preprint arXiv:1609.04309. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. 2017. Improved training of wasserstein gans. In Advances in Neural Information Processing Systems, pages 5769–5779. Michael Gutmann and Aapo Hyv¨arinen. 2010. Noisecontrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 297–304. Michael U Gutmann and Aapo Hyv¨arinen. 2012. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13(Feb):307–361. Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144. Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge graph embedding via dynamic mapping matrix. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 687–696. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In AAAI, volume 15, pages 2181–2187. Hao Liu, Yihao Feng, Yi Mao, Dengyong Zhou, Jian Peng, and Qiang Liu. 2018. Action-dependent control variates for policy optimization via stein identity. In International Conference on Learning Representations. Thang Luong, Richard Socher, and Christopher D Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL, pages 104–113. 1031 Chris J Maddison, Andriy Mnih, and Yee Whye Teh. 2016. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. M. Mirza and S. Osindero. 2014. Conditional Generative Adversarial Nets. ArXiv e-prints. Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In Advances in neural information processing systems, pages 2265–2273. Andriy Mnih and Yee Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. arXiv preprint arXiv:1206.6426. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2016. Self-critical sequence training for image captioning. arXiv preprint arXiv:1612.00563. Francisco JR Ruiz, Michalis K Titsias, Adji B Dieng, and David M Blei. 2018. Augment and reduce: Stochastic inference for large categorical distributions. arXiv preprint arXiv:1802.04220. Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815–823. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538. Abhinav Shrivastava, Abhinav Gupta, and Ross Girshick. 2016. Training region-based object detectors with online hard example mining. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 761–769. Noah A Smith and Jason Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 354–362. Association for Computational Linguistics. Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005. Learning structured prediction models: A large margin approach. In Proceedings of the 22nd international conference on Machine learning, pages 896–903. ACM. Th´eo Trouillon, Johannes Welbl, Sebastian Riedel, ´Eric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International Conference on Machine Learning, pages 2071–2080. Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. 2005. Large margin methods for structured and interdependent output variables. Journal of machine learning research, 6(Sep):1453–1484. Lifu Tu and Kevin Gimpel. 2018. Learning approximate inference networks for structured prediction. In International Conference on Learning Representations. George Tucker, Andriy Mnih, Chris J Maddison, John Lawson, and Jascha Sohl-Dickstein. 2017. Rebar: Low-variance, unbiased gradient estimates for discrete latent variable models. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 2627– 2636. Curran Associates, Inc. Ashish Vaswani, Yinggong Zhao, Victoria Fossum, and David Chiang. 2013. Decoding with largescale neural language models improves translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1387–1392. Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2016. Order-embeddings of images and language. In International Conference on Learning Representations. Peifeng Wang, Shuangyin Li, and Rong Pan. 2018. Incorporating GAN for negative sampling in knowledge representation learning. In The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI18). Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the TwentyEighth AAAI Conference on Artificial Intelligence, pages 1112–1119. AAAI Press. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575. 1032 Bengio Yoshua, Ducharme Rejean, Vincent Pascal, and Jauvin Christian. 2003. A neural probabilistic language model. Journal of Machine Learning Research. Junbo Zhao, Michael Mathieu, and Yann LeCun. 2016. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1033–1043 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1033 Adaptive Scaling for Sparse Detection in Information Extraction Hongyu Lin1,2, Yaojie Lu1,2, Xianpei Han1, Le Sun1 1State Key Laboratory of Computer Science Institute of Software, Chinese Academy of Sciences, Beijing, China 2University of Chinese Academy of Sciences, Beijing, China {hongyu2016,yaojie2017,xianpei,sunle}@iscas.ac.cn Abstract This paper focuses on detection tasks in information extraction, where positive instances are sparsely distributed and models are usually evaluated using F-measure on positive classes. These characteristics often result in deficient performance of neural network based detection models. In this paper, we propose adaptive scaling, an algorithm which can handle the positive sparsity problem and directly optimize over F-measure via dynamic costsensitive learning. To this end, we borrow the idea of marginal utility from economics and propose a theoretical framework for instance importance measuring without introducing any additional hyperparameters. Experiments show that our algorithm leads to a more effective and stable training of neural network based detection models. 1 Introduction Detection problems, aiming to identify occurrences of specific kinds of information (e.g., events, relations, or entities) in documents, are fundamental and widespread in information extraction (IE). For instance, an event detection (Walker et al., 2006) system may want to detect triggers for “Attack” events, such as “shot” in sentence “He was shot”. In relation detection (Hendrickx et al., 2009), we may want to identify all instances of a specific relation, such as “Jane joined Google” for “Employment” relation. Recently, a number of researches have employed neural network models to solve detection problems, and have achieved significant improvement in many tasks, such as event detection (Chen et al., 2015; Nguyen and Grishman, 2015), relation Classification Detection Target Instances All instances Sparse positive instances Evaluation Accuracy or F-measure on all classes F-measure on only positive classes Typical Tasks Text Classification, Sentiment Classification Event Detection, Relation Detection Table 1: Comparison between standard classification tasks and detection problems. detection (Zeng et al., 2014; Santos et al., 2015) and named entity recognition (Huang et al., 2015; Chiu and Nichols, 2015; Lample et al., 2016). These methods usually regard detection problems as standard classification tasks, with several positive classes for targets to detect and one negative class for irrelevant (background) instances. For example, an event detection model will identify event triggers in sentence “He was shot” by classifying word “shot” into positive class “Attack”, and classifying all other words into the negative class “NIL”. To optimize classifiers, cross-entropy loss function is commonly used in this paradigm. However, different from standard classification tasks, detection tasks have unique class inequality characteristic, which stems from both data distribution and applied evaluation metric. Table 1 shows their differences. First, positive instances are commonly sparsely distributed in detection tasks. For example, in event detection, less than 2% of words are a trigger of an event in RichERE dataset (Song et al., 2015). 
Furthermore, detection tasks are commonly evaluated using F-measure on positive classes, rather than accuracy or F-measure on all classes. Therefore positive and negative classes play different roles in the evaluation: the performance is evaluated by only considering how well we can detect positive instances, while correct predictions of negative instances are ignored. Due to the class inequality characteristic, reported results indicate that simply applying stan1034 dard classification paradigm to detection tasks will result in deficient performance (Anand et al., 1993; Carvajal et al., 2004; Lin et al., 2017). This is because minimizing cross-entropy loss function corresponds to maximize the accuracy of neural networks on all training instances, rather than Fmeasure on positive classes. Furthermore, due to the positive sparsity problem, training procedure will easily achieve a high accuracy on negative class, but is difficult to converge on positive classes and often leads to a low recall rate. Although simple sampling heuristics can alleviate this problem to some extent, they either suffer from losing inner class information or over-fitting positive instances (He and Garcia, 2009; Fern´andez-Navarro et al., 2011), which often result in instability during the training procedure. Some previous approaches (Joachims, 2005; Jansche, 2005, 2007; Dembczynski et al., 2011; Chinta et al., 2013; Narasimhan et al., 2014; Natarajan et al., 2016) tried to solve this problem by directly optimizing F-measure. Parambath et al. (2014) proved that it is sufficient to solve F-measure optimization problem via cost-sensitive learning, where class-specific cost factors are applied to indicate the importance of different classes to F-measure. However, optimal factors are not known a priori so ε-search needs to be applied, which is too time consuming for the optimization of neural networks. To solve the class inequality problem for sparse detection model optimization, this paper proposes a theoretical framework to quantify the importance of positive/negative instances during training. We borrow the idea of marginal utility from Economics (Stigler, 1950), and regard the evaluation metric (i.e., F-measure commonly) as the utility to optimize. Based on the above idea, the importance of an instance is measured using the marginal utility of correctly predicting it. For standard classification tasks evaluated using accuracy, our framework proves that correct predictions of positive and negative instances will have equal and unchanged marginal utility, i.e., all instances are with the same importance. For detection problems evaluated using F-measure, our framework proves that the utility of correctly predicting one more positive instance (marginal positive utility) and that of correctly predicting one more negative instance (marginal negative utility) are different and dynamically changed during model training. That is, the importance of instances of each class is not only determined by their data distribution, but also affected by how well the current model can converge on different classes. Based on the above framework, we propose adaptive scaling, a dynamic cost-sensitive learning algorithm which adaptively scales costs of instances of different classes with above quantified importance during the training procedure, and thus can make the optimization criteria consistent with the evaluation metric. 
Furthermore, a batchwise version of our adaptive scaling algorithm is proposed to make it directly applicable as a plug-in of conventional neural network training algorithms. Compared with previous methods, adaptive scaling is designed based on marginal utility framework and doesn’t introduce any additional hyper-parameter, and therefore is more efficient and stable to transfer among datasets and models. Generally, the main contributions of this paper are: • We propose a marginal utility based framework for detection model optimization, which can dynamically quantify instance importance to different evaluation metrics. • Based on the above framework, we present adaptive scaling, a plug-in algorithm which can effectively resolve the class inequality problem in neural detection model optimization via dynamic cost-sensitive learning. We conducted experimental studies1 on event detection, a typical sparse detection task in IE. We thoroughly compared various methods for adapting classical neural network models into detection problems. Experiment results show that our adaptive scaling algorithm not only achieves a better performance, but also is more stable and more adaptive for training neural networks on various models and datasets. 2 Background Relation between Accuracy Metric and CrossEntropy Loss. Recent neural network methods usually regard detection problems as standard classification tasks, with several positive classes to detect, and one negative class for other irrelevant 1Our source code is openly available at github.com/ sanmusunrise/AdaScaling. 1035 instances. Formally, given P positive training instances P = {(xi, yi)P i=1}, and N negative instances N = {(xi, yi)N i=1} (due to positive sparsity, P ≪N), the training of neural network classifiers usually involves in minimizing the softmax cross-entropy loss function regarding to model parameters θ: LCE(θ) = − 1 P + N X (xi,yi)∈P S N log p(yi|xi; θ) (1) and if P, N →∞, we have lim P,N→∞LCE(θ) = −E[log p(y|x; θ)] = −log(Accuracy) (2) which reveals that minimizing cross-entropy loss corresponds to maximize the expected accuracy of the classifier on training data. Divergence between F-Measure and CrossEntropy Loss. However, detection tasks are mostly evaluated using F-measure computed on positive classes, which makes it unsuitable to optimize classifiers using cross-entropy loss. For instance, due to the positive sparsity, simply classifying all instances into negative class will achieve a high accuracy but zero F-measure. To show where this divergence comes from, let c1, c2, ..., ck−1 denote k−1 positive classes and ck is the negative class, we define TP = Pk−1 i=1 TPi, where TPi is the population of correctly predicted instances of positive class ci. TN denotes the number of correctly predicted negative instances. PE represents positive-positive error, where an instance is classified into one positive class ci but its golden label is another positive class cj. Then we have following metrics2: Accuracy = TP + TN P + N (3) Precision = TP N −TN + PE + TP (4) Recall = TP P (5) Fβ = (1 + β2) Precision · Recall β2 · Precision + Recall = (1 + β2) TP β2P + N −TN + PE + TP (6) where β in Fβ is a factor indicating the metric attaches β times as much importance to recall as 2This paper considers micro-averaged metrics. But our conclusions can be easily extended to macro-averaged metrics by scaling above-mentioned coefficients with sample sizes of each class. precision. 
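The divergence can be seen directly by computing both metrics from the same counts; the helper below implements Equations 3–6 for the micro-averaged case, with illustrative numbers for a sparse detection task:

```python
def detection_metrics(TP, TN, PE, P, N, beta=1.0):
    # TP, TN: correctly predicted positive / negative instances
    # PE: positive-positive errors; P, N: number of positive / negative instances
    accuracy = (TP + TN) / (P + N)                                    # Eq. 3
    precision = TP / (N - TN + PE + TP)                               # Eq. 4
    recall = TP / P                                                   # Eq. 5
    f_beta = (1 + beta**2) * TP / (beta**2 * P + N - TN + PE + TP)    # Eq. 6
    return accuracy, precision, recall, f_beta

# Illustrative counts: accuracy is 0.975 while F1 stays below 0.28,
# because accuracy is dominated by the abundant negative class.
print(detection_metrics(TP=50, TN=9700, PE=10, P=200, N=9800))
```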
We can easily see that for accuracy metric, correct predictions of positive and negative instances are equally regarded (i.e., TP and TN are symmetric), which is consistent with crossentropy loss function. However, when measuring using F-measure, this condition is no longer holding. The importance varies from different classes (i.e., TP and TN are asymmetric). Therefore, to make the training procedure consistent with F-measure, it is critical to take this importance difference into consideration. F-measure Optimization via Cost-sensitive Learning. Parambath et al. (2014) have shown that F-measure can be optimized via cost-sensitive learning, where a cost (importance) is set for each class for adjusting their impact on model learning. However, most previous studies set such costs manually (Anand et al., 1993; Domingos, 1999; Krawczyk et al., 2014) or search them on large scale dataset (Nan et al., 2012; Parambath et al., 2014), whose best settings are not transferable and very time-consuming to find for neural network models. This motivates us to develop a theoretical framework for measuring such importance. 3 Adaptive Scaling for Sparse Detection This section describes how to effectively optimize neural network detection models via dynamic cost-sensitive learning. Specifically, we first propose a marginal utility based theoretical framework for measuring the importance of positive/negative instances. Then we present our adaptive scaling algorithm, which can leverage the importance of each class for effective and robust training of neural network detection models. Finally, a batch-wise version of our algorithm is proposed to make it can be applied as a plug-in of batch-based neural network training algorithms. 3.1 Marginal Utility based Importance Measuring Conventional methods commonly deal with the class inequality problem in sparse detection by deemphasizing the importance of negative instances during training. This raises two questions: 1) How to quantify the importance of instances of each class? As mentioned by Parambath et al. (2014), that importance is related to the convergence ability of models, which means that this problem cannot be solved by only considering the distribution of training data. 2) Is the im1036 portance of positive/negative instances remaining unchanged during the entire training process? If not, how it changes according to the convergence of the model? To this end, we borrow the idea of marginal utility from economics, which means the change of utility from consuming one more unit of product. In detection tasks, we regard its evaluation metric (F-measure) as the utility function. The increment of utility from correctly predicting one more positive instance (marginal positive utility) can be regarded as the relative importance of positive classes, and that from correctly predicting one more negative instance (marginal negative utility) is look upon as the relative importance of the negative class. If marginal positive utility overweighs marginal negative utility, positive instances should be considered more important during optimization because it can lead to more improvement on the evaluation metric. In contrast, if marginal negative utility is higher, training procedure should incline to negative instances since it is more effective for optimizing the evaluation metric. Formally, we derive marginal positive utility MU(TP) and marginal negative utility MU(TN) by computing the partial derivative of the evaluation metric with respect to TP and TN respectively. 
For instance, the marginal positive utility MUacc(TP) and the marginal negative utility MUacc(TN) regarding to accuracy metric are: MUacc(TP) = ∂(Accuracy) ∂(TP) = 1 P + N (7) MUacc(TN) = ∂(Accuracy) ∂(TN) = 1 P + N (8) We can see that MUacc(TP) and MUacc(TN) are equal and constant regardless of the values of TP and TN. This indicates that, to optimize accuracy, we can simply treat positive and negative instances equally during the training phase, and this is what we exactly do when optimizing cross-entropy loss in Equation 1. For detection problems evaluated using F-measure, we can obtain the marginal utilities from Equation 6 as: MUFβ(TP) = (1 + β2)(β2P + N −TN + PE) (β2P + N −TN + PE + TP)2 (9) MUFβ(TN) = (1 + β2) · TP (β2P + N −TN + PE + TP)2 (10) This result is different from that of accuracy metric. First, MUFβ(TP) and MUFβ(TN) is no longer equal, indicating that the importance of positive/negative instances to F-measure are different. Besides, it is notable that MUFβ(TP) and MUFβ(TN) are dynamically changed during the training phase and are highly related to how well current model can fit positive instances and negative instances, i.e., TP and TN. 3.2 Adaptive Scaling Algorithm In this section, we describe how to incorporate the above importance measures into the training procedure of neural networks, so that it can dynamically adjust weights of positive and negative instances regarding to F-measure. Specifically, given the current model of neural networks parameterized by θ, let wβ(θ) denote the relative importance of negative instances to positive instances for Fβ-measure. Then wβ(θ) can be computed as the ratio of marginal negative utility MUFβ(TN(θ)) to the marginal positive utility MUFβ(TP(θ)), where TP(θ) and TN(θ) are TP and TN on training data with respect to θ-parameterized model: wβ(θ) = MUFβ(TN(θ)) MUFβ(TP(θ)) = TP(θ) β2P + N −TN(θ) + PE (11) Then at each iteration of the model optimization (i.e., each step of gradient descending), we want the model to take next update step proportional to the gradient of the wβ-scaled cross-entropy loss function LAS(θ) at the current point: LAS(θ) = − X (xi,yi)∈P log p(yi|xi; θ) − X (xi,yi)∈N wβ(θ) · log p(yi|xi; θ) (12) Consequently, based on the contributions that correctly predicting one more instances of each class bringing to F-measure, the training procedure dynamically adjusts its attention between positive and negative instances. Thus our adaptive scaling algorithm can take the class inequality characteristic of detection problems into consideration without introducing any additional hyper-parameter3. 3.3 Properties and Relations to Previous Empirical Conclusions In this section, we investigate the properties of our adaptive scaling algorithm. By investigating the 3Note that β is set according to the applied Fβ evaluation metric and therefore is not a hyper-parameter. 1037 change of scaling coefficient wβ(θ) during training, we find that our method has a tight relation to previous empirical conclusions on solving the class inequality problem. Property 1. The relative importance of positive/negative instances is related to the ratio of the instance number of each class, as well as how well current model can fit each class. It is easy to derive that if we fix the accuracies of each classes, wβ(θ) will be smaller if the ratio of the size of negative instances to that of the positive instances (i.e., N P ) increases. 
This indicates that the training procedure should pay more attention to positive instances if the empirical distribution inclines more severely towards negative class, which is identical to conventional practice that we should deemphasize more on negative instances if the positive sparsity problem is more severe (Japkowicz and Stephen, 2002). Besides, wβ(θ) highly depends on TP and TN, which is identical to previous conclusion that the best cost factors are related to the convergence ability of models (Parambath et al., 2014). Property 2. For micro-averaged F-measure, all positive instances are equally weighted regardless of the sample size of its class. Let MU(TPi) be the marginal utility of positive class ci, we have: MUFβ(TPi) = ∂(Fβ) ∂(TP) · ∂(TP) ∂(TPi) = MUFβ(TP) (13) This corresponds to the applied micro-averaged F-measure, in which all positive instances are equally considered regardless of the sample size of its class. Thus correctly predicting one more positive instance of any class will result in the same increment of micro-averaged F-measure. Property 3. The importance of negative instances increases with the rise of accuracy on positive classes. This is a straightforward consequence because if the model has higher accuracy on positive instances then it should shift more of its attention to negative ones. Besides, if the accuracy of positive class is close to zero, F-measure will also be close to zero no matter how high the accuracy on negative class is, i.e., correctly predicting negative instances can result in little F-measure increment. Therefore negative instances are inconsequential when the accuracy on positive class is low. And with the increment of positive accuracy, the importance of negative class also increases. Property 4. The importance of negative instances increased with the rise of accuracy on the negative class. This can make the training procedure incline to hard negative instances, which is similar to Focal Loss (Lin et al., 2017). During model convergence, easy negative instances can be correctly classified at the very beginning of training and its loss (negative log probability) will reduce very quickly. This is analogical to removing easy negative instances out of the training procedure and the hard negative instances remaining become more balanced proportional to positive instances. Therefore the importance wβ of remaining hard negative instances are increased to make the model fit them better. Property 5. The importance of negative instances increases when more attention is paid to precision than recall. We can see that wβ decreases with the rise of β, which indicates we focus more on recall than precision. This is identical to practice in sampling heuristics that models should attach more attention to negative instances and sub-sample more of them if evaluation metrics incline more to precision than recall. 3.4 Batch-wise Adaptive Scaling In large-scale machine learning, batch-wise gradient based algorithm is more popular and efficient for neural network training. This section presents a batch-wise version of our adaptive scaling algorithm, which uses batch-based estimator ˆ wβ(θ) to replace wβ(θ) in Equation 12. First, because the main challenge of detection tasks is to identify positive instances from background ones, rather than distinguish between positive classes, we ignore the positive-positive error PE in our experiments. In fact, we found that compared with P and N −TN, PE is much smaller and has very limited impact on the final result. 
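A minimal sketch of the resulting batch-wise scaled loss is given below; it uses the in-batch expectations of TP and TN defined formally below (Equations 14–16), treats the scaling coefficient as a constant at the current point (one reasonable implementation choice), and all model and variable names are illustrative:

```python
import torch
import torch.nn.functional as F

def adaptive_scaling_loss(logits, labels, nil_class, beta=1.0):
    # Batch-wise adaptive scaling: Eq. 12 with the in-batch estimators of
    # TP, TN and w_beta (Eqs. 14-16); PE is ignored as discussed above.
    # logits: [batch, n_classes]; labels: [batch]; nil_class: negative class id.
    probs = F.softmax(logits, dim=-1)
    p_gold = probs.gather(1, labels.unsqueeze(-1)).squeeze(-1)   # p(y_i | x_i; theta)
    is_neg = labels.eq(nil_class)
    p_b = (~is_neg).float().sum()                                # P^B
    n_b = is_neg.float().sum()                                   # N^B
    tp_b = p_gold[~is_neg].sum()                                 # Eq. 14
    tn_b = p_gold[is_neg].sum()                                  # Eq. 15
    w = (tp_b / (beta ** 2 * p_b + n_b - tn_b)).detach()         # Eq. 16, held fixed
    nll = -torch.log(p_gold + 1e-12)
    ones = torch.ones_like(nll)
    weights = torch.where(is_neg, w * ones, ones)                # scale only negative instances
    return (weights * nll).sum()
```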
Besides, for TP and TN, we approximate them using their expectation on the current batch, which can produce a robust estimation even when the batch size is not large enough. Specifically, let PB = {(xi, yi)P B i=1} denotes P B positive instances and N B = {(xi, yi)NB i=1} is NB negative instances in the batch, we estimate TP(θ) and TN(θ) as: TP B(θ) = X (xi,yi)∈PB p(yi|xi; θ) (14) TN B(θ) = X (xi,yi)∈N B p(yi|xi; θ) (15) 1038 Then we can compute the estimator ˆ wβ(θ) for wβ(θ) as: ˆ wβ(θ) = TP B(θ) β2P B + N B −TN B(θ) (16) where ˆ wβ(θ) is computed using only the instances in a batch, which makes it can be directly applied as a plug-in of conventional batch-based neural network optimization algorithm where the loss of negative instances in batch are scaled by ˆ wβ(θ). 4 Experiments 4.1 Data Preparation To assess the effectiveness of our method, we conducted experiments on event detection, which is a typical detection task in IE. We used the official evaluation datasets of TAC KBP 2017 Event Nugget Detection Evaluation (LDC2017E55) as test sets, which contains 167 English documents and 167 Chinese documents annotated with Rich ERE annotation standard. For English, we used previously annotated RichERE datasets, including LDC2015E29, LDC2015E68, LDC2016E31 and TAC KBP 2015-2016 Evaluation datasets in LDC2017E02 as the training set. For Chinese, the training set includes LDC2015E105, LDC2015E112, LDC2015E78 and the Chinese part of LDC2017E02. For both Chinese and English, we sampled 20 documents from the evaluation dataset of 2016 year as the development set. Finally, there are 866/20/167 documents in English train/development/test set and 506/20/167 documents in Chinese train/development/test set respectively. We used Stanford CoreNLP toolkit (Manning et al., 2014) for sentence splitting and word segmentation in Chinese. 4.2 Baselines To verify the effectiveness of our adaptive scaling algorithm, we conducted experiments on two state-of-the-art neural network event detection models. The first one is Dynamic Multipooling Convolutional Neural network (DMCNN) proposed by Chen et al. (2015), a one-layer CNN model with a dynamic multi-pooling operation over convolutional feature maps. The second one is BiLSTM used by Feng et al. (2016) and Yang and Mitchell (2017), where a bidirectional LSTM layer is firstly applied to the input sentence and then word-wise classification is directly conducted on the output of the BiLSTM layer of each word. We compared our method with following baselines upon above-mentioned two models: 1) Vanilla models (Vanilla), which used the original cross-entropy loss function without any additional treatment for class inequality problem. 2) Under-sampling (Sampling), which samples only part of negative instances as the training data. This is the most widely used solution in event detection (Chen et al., 2015). 3) Static scaling (Scaling), which scales loss of negative instances with a constant. This is a simple but effective cost-sensitive learning method. 4) Focal Loss (Focal) (Lin et al., 2017), which scales loss of an instance with a factor proportional to the probability of incorrectly predicting it. This method proves to be effective in some detection problems such as Object Detection. 5) Softmax-Margin Loss (CLUZH) (Makarov and Clematide, 2017), which sets additional costs for false-negative error and positive-positive error. This method was used in the 5-model ensembling CLUZH system in TAC KBP 2017 Evaluation. 
Besides, it also introduced several strong hand-crafted features, which helped it achieve the best performance on Chinese and very competitive performance on English in the evaluation.

We evaluated all systems with the micro-F1 metric computed using the official evaluation toolkit.4 We report the average performance over 10 runs (Mean) of each system on the official type classification task.5 We also report the variance (Var) of the performance to assess the stability of the different methods. As TAC KBP 2017 allowed each team to submit 3 different runs, we selected the 3 best runs of each system on the development set and report the best test-set performance among them, referred to as Best3 in this paper, to make our results comparable with the evaluation results. We applied grid search (Hsu et al., 2003) to find the best hyper-parameters for all methods.

4 github.com/hunterhector/EvmEval/tarball/master
5 Realis classification, another task in the evaluation, can be regarded as a standard classification task without a background class, so we do not include it here.

4.3 Overall results

Table 2 shows the overall results on both English and Chinese.

Model                 English                   Chinese
                      Mean    Var    Best3      Mean    Var    Best3
CLUZH*                       48.60                     50.14
BiLSTM  Vanilla       41.91   1.40   43.27      44.23   1.88   47.13
        Focal         43.23   0.52   44.65      44.37   4.45   46.90
        Sampling      46.66   0.27   47.70      48.97   0.97   50.24
        Scaling       46.61   0.35   47.71      48.87   0.83   49.99
        A-Scaling     47.48   0.20   48.11      49.19   0.46   50.40
DMCNN   Vanilla       44.41   2.21   47.12      44.85   5.63   48.16
        Focal         45.24   1.38   47.33      44.61   7.59   49.74
        Sampling      46.83   0.23   47.65      50.77   2.34   52.50
        Scaling       47.06   1.92   48.07      51.38   0.74   52.49
        A-Scaling     47.60   0.16   48.31      51.87   0.39   52.99

Table 2: Experiment results on the TAC KBP 2017 evaluation datasets. * indicates the best (ensemble) results reported in the original paper; only a single score per language is available for CLUZH. "A-Scaling" is the batch-wise adaptive scaling algorithm.

From this table, we can see that: 1) The class inequality problem is crucial for sparse detection tasks and requires special consideration. Compared with the vanilla models, all other methods that address this problem show significant improvements on both models and both languages, especially on the Chinese dataset, where the positive sparsity problem is more severe (Makarov and Clematide, 2017). 2) It is critical to take the different roles of the classes into consideration when optimizing F-measure. Even though down-weighting the loss assigned to well-classified examples can alleviate the positive sparsity problem by deemphasizing easy negative instances during optimization, Focal Loss cannot achieve competitive performance because it does not distinguish between different classes. 3) The marginal utility based framework provides a solid foundation for measuring instance importance, which makes our adaptive scaling algorithm consistently outperform all heuristic baselines. On both the Mean and the Best3 metric, adaptive scaling steadily outperforms the other baselines with both the BiLSTM and the DMCNN model. Furthermore, simple models with adaptive scaling outperform the state-of-the-art CLUZH system on Chinese (which has the more severe positive sparsity problem) and achieve comparable results with it on English. Note that CLUZH is an ensemble of five models and uses extra hand-crafted features. This verifies the effectiveness of our adaptive scaling algorithm. 4) Our adaptive scaling algorithm does not need additional hyper-parameters, and the importance of instances is estimated dynamically. This leads to a more stable and transferable solution for detection model optimization.
First, we can see that adaptive scaling has the lowest variance among all methods, which means that it is more stable than the other methods. Besides, adaptive scaling does not introduce any additional hyper-parameters. In contrast, in our experiments we found that the best hyper-parameters for under-sampling (the ratio of sampled negative instances to positive instances) and for static scaling (the prior cost of negative instances) varied remarkably across models and datasets.

Figure 1: Box plots of Sampling, Scaling, and A-Scaling for each model and language (panels: LSTM-EN, DMCNN-EN, LSTM-ZH, DMCNN-ZH). * indicates that outliers not shown in the figure exist.

4.4 Stability Analysis

This section investigates the stability of the different methods. Table 2 has shown that adaptive scaling has a much smaller variance than the other baselines. To investigate the reason, Figure 1 shows the box plots of adaptive scaling and the other heuristic methods on both models and both languages. We can see that the interquartile ranges (i.e., the difference between the 75th and 25th percentiles of the data) of the performance of adaptive scaling are smaller than those of the other methods. In all groups of experiments, the performance of our adaptive scaling algorithm fluctuates less. This demonstrates the stability of the adaptive scaling algorithm. Furthermore, we found that the conventional methods are more unstable on the Chinese dataset, where the data distribution is more skewed. We believe that this is because: 1) Under-sampling might undermine the inner sub-concept structure of the negative class by simply dropping negative instances, and its performance depends on the quality of the sampled data, which can result in instability. 2) Static scaling sets the importance of negative instances statically for the entire training procedure. However, as shown in Section 3, the relative importance between different classes changes dynamically during training, which makes static scaling incapable of achieving stable performance in different phases of training. 3) Adaptive scaling achieves more stable performance throughout the entire training procedure. First, it does not drop any instances, so it maintains the inner structure of the negative class without any information loss. Besides, our algorithm can dynamically adjust the scaling factor during training and can therefore automatically shift attention between the positive and negative classes according to the convergence state of the model.

Figure 2: Change of Precision, Recall and F1 with respect to β using adaptive scaling on DMCNN (panels: DMCNN-EN, DMCNN-ZH).

4.5 Adaptability on Different β

Figure 2 shows the change of Precision, Recall and F1 with respect to different β. We can see that when β increases, precision decreases and recall increases. This matches the nature of Fβ, where β represents the relative importance of precision and recall. Furthermore, adaptive scaling with β = 1 achieves the best performance on the F1 measure. This further demonstrates that wβ, derived from our marginal utility framework, is a good and adaptive estimator of the relative importance of the negative class to the positive classes for the Fβ measure.

5 Related Work

This paper proposes an adaptive scaling algorithm for sparse detection problems.
Related work to this paper mainly includes: Classification on Imbalanced Data. Conventional approaches addressed data imbalance from either data-level or algorithm-level. Data-level approaches resample the training data to maintain the balance between different classes (Japkowicz and Stephen, 2002; Drummond et al., 2003). Further improvements on this direction involve how to better sampling data with minimum information loss (Carvajal et al., 2004; Estabrooks et al., 2004; Han et al., 2005; Fern´andez-Navarro et al., 2011). Algorithm-level approaches attempt to choose an appropriate inductive bias on models or algorithms to make them more suitable on data imbalance condition, including instance weighting (Ting, 2002; Lin et al., 2017), cost-sensitive learning (Anand et al., 1993; Domingos, 1999; Sun et al., 2007; Krawczyk et al., 2014) and active learning approaches (Ertekin et al., 2007a,b; Zhu and Hovy, 2007). F-Measure Optimization. Previous research on F-measure optimization mainly fell into two paradigms (Nan et al., 2012): 1) Decisiontheoretic approaches (DTA), which first estimate a probability model and find the optimal predictions according to that model (Joachims, 2005; Jansche, 2005, 2007; Dembczynski et al., 2011; BusaFekete et al., 2015; Natarajan et al., 2016). The main drawback of these methods is that they need to estimate the joint probability with exponentially many combinations, thus make them hard to use in practice; 2) Empirical utility maximization (EUM) approaches, which adapt approximate methods to find a best classifier in hypothesises (Musicant et al., 2003; Chinta et al., 2013; Parambath et al., 2014; Narasimhan et al., 2014). However, EUM methods depend on thresholds or costs that are not known a priori so time-consuming searching on large development set is required. Our adaptive scaling algorithm is partially inspired by EUM approaches, but is based on the marginal utility framework, which doesn’t introduce any additional hyper-parameter or searching procedure. Neural Network based Event Detection. Event detection is a typical task of detection problems. Recently neural network based methods have achieved significant progress in Event Detection. CNNs (Chen et al., 2015; Nguyen and Grishman, 2015) and Bi-LSTMs (Zeng et al., 2016; Yang and Mitchell, 2017) are two effective and widely used models. Some improvements have been made by jointly predicting triggers and arguments (Nguyen et al., 2016) or introducing more complicated architectures to capture larger scale of contexts (Feng et al., 2016; Nguyen and Grishman, 2016; Ghaeini et al., 2016). 6 Conclusions This paper proposes adaptive scaling algorithm for detection tasks, which can deal with its positive 1041 sparsity problem and directly optimize F-measure by adaptively scaling the influence of negative instances in loss function. Based on the marginal utility theory framework, our method leads to more effective, stable and transferable optimization of neural networks without introducing additional hyper-parameters. Experiments on event detection verified the effectiveness and stability of our adaptive scaling algorithm. The divergence between loss functions and evaluation metrics is common in NLP and machine learning. In the future we want to apply our marginal utility based framework to other metrics, such as Mean Average Precision (MAP). Acknowledgments We sincerely thank the reviewers for their valuable comments. 
Moreover, this work is supported by the National Natural Science Foundation of China under Grants no. 61433015, 61572477 and 61772505, and the Young Elite Scientists Sponsorship Program no. YESS20160177. References Rangachari Anand, Kishan G Mehrotra, Chilukuri K Mohan, and Sanjay Ranka. 1993. An improved algorithm for neural network classification of imbalanced training sets. IEEE Transactions on Neural Networks, 4(6):962–969. R´obert Busa-Fekete, Bal´azs Sz¨or´enyi, Krzysztof Dembczynski, and Eyke H¨ullermeier. 2015. Online f-measure optimization. In Advances in Neural Information Processing Systems, pages 595–603. K Carvajal, M Chac´on, D Mery, and G Acuna. 2004. Neural network method for failure detection with skewed class distribution. Insight-Non-Destructive Testing and Condition Monitoring, 46(7):399–402. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of ACL 2015. Punya Murthy Chinta, P Balamurugan, Shirish Shevade, and M Narasimha Murty. 2013. Optimizing f-measure with non-convex loss and sparse linear classifiers. In Neural Networks (IJCNN), The 2013 International Joint Conference on, pages 1–8. IEEE. Jason PC Chiu and Eric Nichols. 2015. Named entity recognition with bidirectional lstm-cnns. arXiv preprint arXiv:1511.08308. Krzysztof J Dembczynski, Willem Waegeman, Weiwei Cheng, and Eyke H¨ullermeier. 2011. An exact algorithm for f-measure maximization. In Advances in neural information processing systems, pages 1404– 1412. Pedro M. Domingos. 1999. Metacost: A general method for making classifiers cost-sensitive. In KDD. Chris Drummond, Robert C Holte, et al. 2003. C4. 5, class imbalance, and cost sensitivity: why undersampling beats over-sampling. In Workshop on learning from imbalanced datasets II, volume 11, pages 1–8. Citeseer. Seyda Ertekin, Jian Huang, Leon Bottou, and Lee Giles. 2007a. Learning on the border: active learning in imbalanced data classification. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 127–136. ACM. Seyda Ertekin, Jian Huang, and C Lee Giles. 2007b. Active learning for class imbalance problem. In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval, pages 823–824. ACM. Andrew Estabrooks, Taeho Jo, and Nathalie Japkowicz. 2004. A multiple resampling method for learning from imbalanced data sets. Computational intelligence, 20(1):18–36. Xiaocheng Feng, Lifu Huang, Duyu Tang, Bing Qin, Heng Ji, and Ting Liu. 2016. A languageindependent neural network for event detection. In Proceedings of ACL 2016. Francisco Fern´andez-Navarro, C´esar Herv´as-Mart´ınez, and Pedro Antonio Guti´errez. 2011. A dynamic over-sampling procedure based on sensitivity for multi-class problems. Pattern Recognition, 44(8):1821–1833. Reza Ghaeini, Xiaoli Z Fern, Liang Huang, and Prasad Tadepalli. 2016. Event nugget detection with forward-backward recurrent neural networks. In Proceedings of ACL 2016. Hui Han, Wen-Yuan Wang, and Bing-Huan Mao. 2005. Borderline-smote: a new over-sampling method in imbalanced data sets learning. In International Conference on Intelligent Computing, pages 878– 887. Springer. Haibo He and Edwardo A Garcia. 2009. Learning from imbalanced data. IEEE Transactions on knowledge and data engineering, 21(9):1263–1284. 
Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, pages 94–99. Association for Computational Linguistics. 1042 Chih-Wei Hsu, Chih-Chung Chang, Chih-Jen Lin, et al. 2003. A practical guide to support vector classification. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Martin Jansche. 2005. Maximum expected f-measure training of logistic regression models. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 692–699. Association for Computational Linguistics. Martin Jansche. 2007. A maximum expected utility framework for binary sequence labeling. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 736–743. Nathalie Japkowicz and Shaju Stephen. 2002. The class imbalance problem: A systematic study. Intelligent data analysis, 6(5):429–449. Thorsten Joachims. 2005. A support vector method for multivariate performance measures. In Proceedings of the 22nd international conference on Machine learning, pages 377–384. ACM. Bartosz Krawczyk, Michał Wo´zniak, and Gerald Schaefer. 2014. Cost-sensitive decision tree ensembles for effective imbalanced classification. Applied Soft Computing, 14:554–562. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Doll´ar. 2017. Focal loss for dense object detection. arXiv preprint arXiv:1708.02002. Peter Makarov and Simon Clematide. 2017. UZH at TAC KBP 2017: Event nugget detection via joint learning with softmax-margin objective. In Proceedings of TAC 2017. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In In Proceedings of ACL 2014. David R Musicant, Vipin Kumar, Aysel Ozgur, et al. 2003. Optimizing f-measure with support vector machines. In FLAIRS conference, pages 356–360. Ye Nan, Kian Ming Chai, Wee Sun Lee, and Hai Leong Chieu. 2012. Optimizing f-measure: A tale of two approaches. arXiv preprint arXiv:1206.4625. Harikrishna Narasimhan, Rohit Vaish, and Shivani Agarwal. 2014. On the statistical consistency of plugin classifiers for non-decomposable performance measures. In Advances in Neural Information Processing Systems, pages 1493–1501. Nagarajan Natarajan, Oluwasanmi Koyejo, Pradeep Ravikumar, and Inderjit Dhillon. 2016. Optimal classification with multivariate losses. In International Conference on Machine Learning, pages 1530–1538. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of NAACL-HLT 2016. Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proceedings of ACL 2015. Thien Huu Nguyen and Ralph Grishman. 2016. Modeling skip-grams for event detection with convolutional neural networks. In Proceedings of EMNLP 2016. Shameem Puthiya Parambath, Nicolas Usunier, and Yves Grandvalet. 2014. 
Optimizing f-measures by cost-sensitive classification. In Advances in Neural Information Processing Systems, pages 2123–2131. Cicero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. arXiv preprint arXiv:1504.06580. Zhiyi Song, Ann Bies, Stephanie Strassel, Tom Riese, Justin Mott, Joe Ellis, Jonathan Wright, Seth Kulick, Neville Ryant, and Xiaoyi Ma. 2015. From light to rich ere: annotation of entities, relations, and events. In Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation, pages 89–98. George J Stigler. 1950. The development of utility theory. i. Journal of Political Economy, 58(4):307– 327. Yanmin Sun, Mohamed S Kamel, Andrew KC Wong, and Yang Wang. 2007. Cost-sensitive boosting for classification of imbalanced data. Pattern Recognition, 40(12):3358–3378. Kai Ming Ting. 2002. An instance-weighting method to induce cost-sensitive trees. IEEE Transactions on Knowledge and Data Engineering, 14(3):659–665. Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia, 57. Bishan Yang and Tom Mitchell. 2017. Leveraging knowledge bases in lstms for improving machine reading. In Proceedings of. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335–2344. 1043 Ying Zeng, Honghui Yang, Yansong Feng, Zheng Wang, and Dongyan Zhao. 2016. A convolution bilstm neural network model for chinese event extraction. In Proceedings of NLPCC-ICCPOL 2016. Jingbo Zhu and Eduard Hovy. 2007. Active learning for word sense disambiguation with methods for addressing the class imbalance problem. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL).
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1044–1054 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1044 Strong Baselines for Neural Semi-Supervised Learning under Domain Shift Sebastian Ruder♠♣ Barbara Plank♥3 ♠Insight Research Centre, National University of Ireland, Galway, Ireland ♣Aylien Ltd., Dublin, Ireland ♥Center for Language and Cognition, University of Groningen, The Netherlands 3Department of Computer Science, IT University of Copenhagen, Denmark [email protected],[email protected] Abstract Novel neural models have been proposed in recent years for learning under domain shift. Most models, however, only evaluate on a single task, on proprietary datasets, or compare to weak baselines, which makes comparison of models difficult. In this paper, we re-evaluate classic general-purpose bootstrapping approaches in the context of neural networks under domain shifts vs. recent neural approaches and propose a novel multi-task tri-training method that reduces the time and space complexity of classic tri-training. Extensive experiments on two benchmarks are negative: while our novel method establishes a new state-of-the-art for sentiment analysis, it does not fare consistently the best. More importantly, we arrive at the somewhat surprising conclusion that classic tri-training, with some additions, outperforms the state of the art. We conclude that classic approaches constitute an important and strong baseline. 1 Introduction Deep neural networks (DNNs) excel at learning from labeled data and have achieved state of the art in a wide array of supervised NLP tasks such as dependency parsing (Dozat and Manning, 2017), named entity recognition (Lample et al., 2016), and semantic role labeling (He et al., 2017). In contrast, learning from unlabeled data, especially under domain shift, remains a challenge. This is common in many real-world applications where the distribution of the training and test data differs. Many state-of-the-art domain adaptation approaches leverage task-specific characteristics such as sentiment words (Blitzer et al., 2006; Wu and Huang, 2016) or distributional features (Schnabel and Schütze, 2014; Yin et al., 2015) which do not generalize to other tasks. Other approaches that are in theory more general only evaluate on proprietary datasets (Kim et al., 2017) or on a single benchmark (Zhou et al., 2016), which carries the risk of overfitting to the task. In addition, most models only compare against weak baselines and, strikingly, almost none considers evaluating against approaches from the extensive semi-supervised learning (SSL) literature (Chapelle et al., 2006). In this work, we make the argument that such algorithms make strong baselines for any task in line with recent efforts highlighting the usefulness of classic approaches (Melis et al., 2017; Denkowski and Neubig, 2017). We re-evaluate bootstrapping algorithms in the context of DNNs. These are general-purpose semi-supervised algorithms that treat the model as a black box and can thus be used easily—with a few additions—with the current generation of NLP models. Many of these methods, though, were originally developed with in-domain performance in mind, so their effectiveness in a domain adaptation setting remains unexplored. 
In particular, we re-evaluate three traditional bootstrapping methods, self-training (Yarowsky, 1995), tri-training (Zhou and Li, 2005), and tritraining with disagreement (Søgaard, 2010) for neural network-based approaches on two NLP tasks with different characteristics, namely, a sequence prediction and a classification task (POS tagging and sentiment analysis). We evaluate the methods across multiple domains on two wellestablished benchmarks, without taking any further task-specific measures, and compare to the best results published in the literature. We make the somewhat surprising observation that classic tri-training outperforms task-agnostic state-of-the-art semi-supervised learning (Laine and Aila, 2017) and recent neural adaptation approaches (Ganin et al., 2016; Saito et al., 2017). 1045 In addition, we propose multi-task tri-training, which reduces the main deficiency of tri-training, namely its time and space complexity. It establishes a new state of the art on unsupervised domain adaptation for sentiment analysis but it is outperformed by classic tri-training for POS tagging. Contributions Our contributions are: a) We propose a novel multi-task tri-training method. b) We show that tri-training can serve as a strong and robust semi-supervised learning baseline for the current generation of NLP models. c) We perform an extensive evaluation of bootstrapping1 algorithms compared to state-of-the-art approaches on two benchmark datasets. d) We shed light on the task and data characteristics that yield the best performance for each model. 2 Neural bootstrapping methods We first introduce three classic bootstrapping methods, self-training, tri-training, and tri-training with disagreement and detail how they can be used with neural networks. For in-depth details we refer the reader to (Abney, 2007; Chapelle et al., 2006; Zhu and Goldberg, 2009). We introduce our novel multitask tri-training method in §2.3. 2.1 Self-training Self-training (Yarowsky, 1995; McClosky et al., 2006b) is one of the earliest and simplest bootstrapping approaches. In essence, it leverages the model’s own predictions on unlabeled data to obtain additional information that can be used during training. Typically the most confident predictions are taken at face value, as detailed next. Self-training trains a model m on a labeled training set L and an unlabeled data set U. At each iteration, the model provides predictions m(x) in the form of a probability distribution over classes for all unlabeled examples x in U. If the probability assigned to the most likely class is higher than a predetermined threshold τ, x is added to the labeled examples with p(x) = arg max m(x) as pseudo-label. This instantiation is the most widely used and shown in Algorithm 1. Calibration It is well-known that output probabilities in neural networks are poorly calibrated (Guo et al., 2017). Using a fixed threshold τ is thus 1We use the term bootstrapping as used in the semisupervised learning literature (Zhu, 2005), which should not be confused with the statistical procedure of the same name (Efron and Tibshirani, 1994). Algorithm 1 Self-training (Abney, 2007) 1: repeat 2: m ←train_model(L) 3: for x ∈U do 4: if max m(x) > τ then 5: L ←L ∪{(x, p(x))} 6: until no more predictions are confident not the best choice. While the absolute confidence value is inaccurate, we can expect that the relative order of confidences is more robust. 
For this reason, we select the top n unlabeled examples that have been predicted with the highest confidence after every epoch and add them to the labeled data. This is one of the many variants for self-training, called throttling (Abney, 2007). We empirically confirm that this outperforms the classic selection in our experiments. Online learning In contrast to many classic algorithms, DNNs are trained online by default. We compare training setups and find that training until convergence on labeled data and then training until convergence using self-training performs best. Classic self-training has shown mixed success. In parsing it proved successful only with small datasets (Reichart and Rappoport, 2007) or when a generative component is used together with a reranker in high-data conditions (McClosky et al., 2006b; Suzuki and Isozaki, 2008). Some success was achieved with careful task-specific data selection (Petrov and McDonald, 2012), while others report limited success on a variety of NLP tasks (Plank, 2011; Van Asch and Daelemans, 2016; van der Goot et al., 2017). Its main downside is that the model is not able to correct its own mistakes and errors are amplified, an effect that is increased under domain shift. 2.2 Tri-training Tri-training (Zhou and Li, 2005) is a classic method that reduces the bias of predictions on unlabeled data by utilizing the agreement of three independently trained models. Tri-training (cf. Algorithm 2) first trains three models m1, m2, and m3 on bootstrap samples of the labeled data L. An unlabeled data point is added to the training set of a model mi if the other two models mj and mk agree on its label. Training stops when the classifiers do not change anymore. Tri-training with disagreement (Søgaard, 2010) 1046 Algorithm 2 Tri-training (Zhou and Li, 2005) 1: for i ∈{1..3} do 2: Si ←bootstrap_sample(L) 3: mi ←train_model(Si) 4: repeat 5: for i ∈{1..3} do 6: Li ←∅ 7: for x ∈U do 8: if pj(x) = pk(x)(j, k ̸= i) then 9: Li ←Li ∪{(x, pj(x))} mi ←train_model(L ∪Li) 10: until none of mi changes 11: apply majority vote over mi is based on the intuition that a model should only be strengthened in its weak points and that the labeled data should not be skewed by easy data points. In order to achieve this, it adds a simple modification to the original algorithm (altering line 8 in Algorithm 2), requiring that for an unlabeled data point on which mj and mk agree, the other model mi disagrees on the prediction. Tri-training with disagreement is more data-efficient than tritraining and has achieved competitive results on part-of-speech tagging (Søgaard, 2010). Sampling unlabeled data Both tri-training and tri-training with disagreement can be very expensive in their original formulation as they require to produce predictions for each of the three models on all unlabeled data samples, which can be in the millions in realistic applications. We thus propose to sample a number of unlabeled examples at every epoch. For all traditional bootstrapping approaches we sample 10k candidate instances in each epoch. For the neural approaches we use a linearly growing candidate sampling scheme proposed by (Saito et al., 2017), increasing the candidate pool size as the models become more accurate. Confidence thresholding Similar to selftraining, we can introduce an additional requirement that pseudo-labeled examples are only added if the probability of the prediction of at least one model is higher than some threshold τ. 
We did not find this to outperform prediction without threshold for traditional tri-training, but thresholding proved essential for our method (§2.3). The most important condition for tri-training and tri-training with disagreement is that the models are diverse. Typically, bootstrap samples are used Figure 1: Multi-task tri-training (MT-Tri). to create this diversity (Zhou and Li, 2005; Søgaard, 2010). However, training separate models on bootstrap samples of a potentially large amount of training data is expensive and takes a lot of time. This drawback motivates our approach. 2.3 Multi-task tri-training In order to reduce both the time and space complexity of tri-training, we propose Multi-task Tritraining (MT-Tri). MT-Tri leverages insights from multi-task learning (MTL) (Caruana, 1993) to share knowledge across models and accelerate training. Rather than storing and training each model separately, we propose to share the parameters of the models and train them jointly using MTL.2 All models thus collaborate on learning a joint representation, which improves convergence. The output softmax layers are model-specific and are only updated for the input of the respective model. We show the model in Figure 1 (as instantiated for POS tagging). As the models leverage a joint representation, we need to ensure that the features used for prediction in the softmax layers of the different models are as diverse as possible, so that the models can still learn from each other’s predictions. In contrast, if the parameters in all output softmax layers were the same, the method would degenerate to self-training. To guarantee diversity, we introduce an orthogonality constraint (Bousmalis et al., 2016) as an additional loss term, which we define as follows: Lorth = ∥W ⊤ m1Wm2∥2 F (1) where | · ∥2 F is the squared Frobenius norm and Wm1 and Wm2 are the softmax output parameters 2Note: we use the term multi-task learning here albeit all tasks are of the same kind, similar to work on multi-lingual modeling treating each language (but same label space) as separate task e.g., (Fang and Cohn, 2017). It is interesting to point out that our model is further doing implicit multi-view learning by way of the orthogonality constraint. 1047 of the two source and pseudo-labeled output layers m1 and m2, respectively. The orthogonality constraint encourages the models not to rely on the same features for prediction. As enforcing pairwise orthogonality between three matrices is not possible, we only enforce orthogonality between the softmax output layers of m1 and m2,3 while m3 is gradually trained to be more target-specific. We parameterize Lorth by γ=0.01 following (Liu et al., 2017). We do not further tune γ. More formally, let us illustrate the model by taking the sequence prediction task (Figure 1) as illustration. Given an utterance with labels y1, .., yn, our Multi-task Tri-training loss consists of three task-specific (m1, m2, m3) tagging loss functions (where ⃗h is the uppermost Bi-LSTM encoding): L(θ) = − X i X 1,..,n log Pmi(y|⃗h) + γLorth (2) In contrast to classic tri-training, we can train the multi-task model with its three model-specific outputs jointly and without bootstrap sampling on the labeled source domain data until convergence, as the orthogonality constraint enforces different representations between models m1 and m2. From this point, we can leverage the pair-wise agreement of two output layers to add pseudo-labeled examples as training data to the third model. 
We train the third output layer m3 only on pseudo-labeled target instances in order to make tri-training more robust to a domain shift. For the final prediction, majority voting of all three output layers is used, which resulted in the best instantiation, together with confidence thresholding (τ = 0.9, except for highresource POS where τ = 0.8 performed slightly better). We also experimented with using a domainadversarial loss (Ganin et al., 2016) on the jointly learned representation, but found this not to help. The full pseudo-code is given in Algorithm 3. Computational complexity The motivation for MT-Tri was to reduce the space and time complexity of tri-training. We thus give an estimate of its efficiency gains. MT-Tri is ~3× more spaceefficient than regular tri-training; tri-training stores one set of parameters for each of the three models, while MT-Tri only stores one set of parameters (we use three output layers, but these make up a comparatively small part of the total parameter budget). In terms of time efficiency, tri-training first 3We also tried enforcing orthogonality on a hidden layer rather than the output layer, but this did not help. Algorithm 3 Multi-task Tri-training 1: m ←train_model(L) 2: repeat 3: for i ∈{1..3} do 4: Li ←∅ 5: for x ∈U do 6: if pj(x) = pk(x)(j, k ̸= i) then 7: Li ←Li ∪{(x, pj(x))} 8: if i = 3 then mi = train_model(Li) 9: elsemi ←train_model(L ∪Li) 10: until end condition is met 11: apply majority vote over mi requires to train each of the models from scratch. The actual tri-training takes about the same time as training from scratch and requires a separate forward pass for each model, effectively training three independent models simultaneously. In contrast, MT-Tri only necessitates one forward pass as well as the evaluation of the two additional output layers (which takes a negligible amount of time) and requires about as many epochs as tri-training until convergence (see Table 3, second column) while adding fewer unlabeled examples per epoch (see Section 3.4). In our experiments, MT-Tri trained about 5-6× faster than traditional tri-training. MT-Tri can be seen as a self-ensembling technique, where different variations of a model are used to create a stronger ensemble prediction. Recent approaches in this line are snapshot ensembling (Huang et al., 2017) that ensembles models converged to different minima during a training run, asymmetric tri-training (Saito et al., 2017) (ASYM) that leverages agreement on two models as information for the third, and temporal ensembling (Laine and Aila, 2017), which ensembles predictions of a model at different epochs. We tried to compare to temporal ensembling in our experiments, but were not able to obtain consistent results.4 We compare to the closest most recent method, asymmetric tritraining (Saito et al., 2017). It differs from ours in two aspects: a) ASYM leverages only pseudolabels from data points on which m1 and m2 agree, and b) it uses only one task (m3) as final predictor. In essence, our formulation of MT-Tri is closer to the original tri-training formulation (agreements on two provide pseudo-labels to the third) thereby incorporating more diversity. 4We suspect that the sparse features in NLP and the domain shift might be detrimental to its unsupervised consistency loss. 
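As a concrete illustration of Equations 1 and 2, the following is a minimal NumPy sketch of the orthogonality penalty and the joint MT-Tri objective for a single utterance. The function names are ours and purely illustrative; framework choice, batching, and gradient handling are deliberately left out, so this is a sketch of the objective rather than the authors' implementation.

```python
import numpy as np

def orthogonality_penalty(W_m1, W_m2):
    """Squared Frobenius norm of W_m1^T W_m2 (Equation 1).

    W_m1, W_m2: softmax output weight matrices of models m1 and m2,
    shaped (hidden_dim, num_labels). The penalty is small when the two
    output layers rely on different directions of the shared encoding."""
    return np.sum((W_m1.T @ W_m2) ** 2)

def mt_tri_loss(log_probs_per_model, gold_labels, W_m1, W_m2, gamma=0.01):
    """Joint MT-Tri loss (Equation 2): the sum of the three model-specific
    negative log-likelihoods over the utterance plus gamma * L_orth.

    log_probs_per_model: three (num_tokens, num_labels) arrays of
    log-probabilities, one per output layer m1, m2, m3.
    gold_labels: (num_tokens,) gold label indices."""
    nll = 0.0
    for log_p in log_probs_per_model:
        nll -= log_p[np.arange(len(gold_labels)), gold_labels].sum()
    return nll + gamma * orthogonality_penalty(W_m1, W_m2)
```

Only W_m1 and W_m2 enter the penalty, mirroring the constraint described above, while m3 is left free to specialize on pseudo-labeled target data.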
Domain         # labeled   # unlabeled
POS tagging
  Answers          3,489        27,274
  Emails           4,900     1,194,173
  Newsgroups       2,391     1,000,000
  Reviews          3,813     1,965,350
  Weblogs          2,031       524,834
  WSJ             30,060       100,000
Sentiment
  Book             2,000         4,465
  DVD              2,000         3,586
  Electronics      2,000         5,681
  Kitchen          2,000         5,945

Table 1: Number of labeled and unlabeled sentences for each domain in the SANCL 2012 dataset (Petrov and McDonald, 2012) for POS tagging (above) and the Amazon Reviews dataset (Blitzer et al., 2006) for sentiment analysis (below).

3 Experiments

In order to ascertain which methods are robust across different domains, we evaluate on two widely used unsupervised domain adaptation datasets for two tasks, a sequence labeling and a classification task; cf. Table 1 for data statistics.

3.1 POS tagging

For POS tagging we use the SANCL 2012 shared task dataset (Petrov and McDonald, 2012) and compare to the top results in both low- and high-data conditions (Schnabel and Schütze, 2014; Yin et al., 2015). Both are strong baselines, as the FLORS tagger was developed for this challenging dataset and is based on contextual distributional features (excluding the word's identity) and hand-crafted suffix and shape features (including some language-specific morphological features). We want to gauge to what extent we can adopt a nowadays fairly standard (but more lexicalized) general neural tagger. Our POS tagging model is a state-of-the-art Bi-LSTM tagger (Plank et al., 2016) with word and 100-dim character embeddings. Word embeddings are initialized with the 100-dim GloVe embeddings (Pennington et al., 2014). The Bi-LSTM has one hidden layer with 100 dimensions. The base POS model is trained on WSJ with early stopping on the WSJ development set, using patience 2, Gaussian noise with σ = 0.2 and word dropout with p = 0.25 (Kiperwasser and Goldberg, 2016). Regarding data, the source domain is the OntoNotes 4.0 release of the Penn Treebank Wall Street Journal (WSJ) annotated with 48 fine-grained POS tags. This amounts to 30,060 labeled sentences. We use 100,000 WSJ sentences from 1988 as unlabeled data, following Schnabel and Schütze (2014).5 As target data, we use the five SANCL domains (answers, emails, newsgroups, reviews, weblogs). We restrict the amount of unlabeled data for each SANCL domain to the first 100k sentences, and do not do any pre-processing. We consider the development set of ANSWERS as our only target dev set to set hyperparameters. This may result in suboptimal per-domain settings but better resembles an unsupervised adaptation scenario.
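For orientation, the following is a rough PyTorch-style sketch of the word-level portion of the tagger just described. It is illustrative only: the authors implement their models in DyNet (as noted below), and the character embeddings, Gaussian noise, and word dropout are omitted here for brevity.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Word-level Bi-LSTM POS tagger sketch: 100-dim embeddings,
    one 100-dim Bi-LSTM layer, and a per-token softmax over tags."""

    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=100,
                 pretrained_embeddings=None):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        if pretrained_embeddings is not None:  # e.g. 100-dim GloVe vectors
            self.embed.weight.data.copy_(pretrained_embeddings)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True,
                              batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, word_ids):
        # word_ids: (batch, seq_len) -> (batch, seq_len, num_tags) scores
        states, _ = self.bilstm(self.embed(word_ids))
        return self.out(states)
```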
3.3 Baselines Besides comparing to the top results published on both datasets, we include the following baselines: a) the task model trained on the source domain; b) self-training (Self); c) tri-training (Tri); d) tri-training with disagreement (Tri-D); and e) asymmetric tri-training (Saito et al., 2017). Our proposed model is multi-task tri-training (MTTri). We implement our models in DyNet (Neubig et al., 2017). Reporting single evaluation scores might result in biased results (Reimers and Gurevych, 2017). Throughout the paper, we report mean accuracy and standard deviation over five runs for POS tagging and over ten runs for 5Note that our unlabeled data might slightly differ from theirs. We took the first 100k sentences from the 1988 WSJ dataset from the BLLIP 1987-89 WSJ Corpus Release 1. 1049 Figure 2: Average results for unsupervised domain adaptation on the Amazon dataset. Domains: B (Book), D (DVD), E (Electronics), K (Kitchen). Results for VFAE, DANN, and Asym are from Saito et al. (2017). sentiment analysis. Significance is computed using bootstrap test. The code for all experiments is released at: https://github.com/bplank/ semi-supervised-baselines. 3.4 Results Sentiment analysis We show results for sentiment analysis for all 12 domain adaptation scenarios in Figure 2. For clarity, we also show the accuracy scores averaged across each target domain as well as a global macro average in Table 2. Model D B E K Avg VFAE* 76.57 73.40 80.53 82.93 78.36 DANN* 75.40 71.43 77.67 80.53 76.26 Asym* 76.17 72.97 80.47 83.97 78.39 Src 75.91 73.47 75.61 79.58 76.14 Self 78.00 74.55 76.54 80.30 77.35 Tri 78.72 75.64 78.60 83.26 79.05 Tri-D 76.99 74.44 78.30 80.59 77.58 MT-Tri 78.14 74.86 81.45 82.14 79.15 Table 2: Average accuracy scores for each SA target domain. *: result from Saito et al. (2017). Self-training achieves surprisingly good results but is not able to compete with tri-training. Tritraining with disagreement is only slightly better than self-training, showing that the disagreement component might not be useful when there is a strong domain shift. Tri-training achieves the best average results on two target domains and clearly outperforms the state of the art on average. MT-Tri finally outperforms the state of the art on 3/4 domains, and even slightly traditional tritraining, resulting in the overall best method. This improvement is mainly due to the B->E and D->E scenarios, on which tri-training struggles. These domain pairs are among those with the highest Adistance (Blitzer et al., 2007), which highlights that tri-training has difficulty dealing with a strong shift in domain. Our method is able to mitigate this deficiency by training one of the three output layers only on pseudo-labeled target domain examples. In addition, MT-Tri is more efficient as it adds a smaller number of pseudo-labeled examples than tri-training at every epoch. For sentiment analysis, tri-training adds around 1800-1950/2000 unlabeled examples at every epoch, while MT-Tri only adds around 100-300 in early epochs. This shows that the orthogonality constraint is useful for inducing diversity. In addition, adding fewer examples poses a smaller risk of swamping the learned representations with useless signals and is more akin to fine-tuning, the standard method for supervised domain adaptation (Howard and Ruder, 2018). We observe an asymmetry in the results between some of the domain pairs, e.g. B->D and D->B. 
We hypothesize that the asymmetry may be due to properties of the data and that the domains are relatively far apart e.g., in terms of A-distance. In fact, asymmetry in these domains is already reflected 1050 Target domains Model ep Answers Emails Newsgroups Reviews Weblogs Avg WSJ µpseudo Src (+glove) 87.63 ±.37 86.49 ±.35 88.60 ±.22 90.12 ±.32 92.85 ±.17 89.14 ±.28 95.49 ±.09 — Self (5) 87.64 ±.18 86.58 ±.30 88.42 ±.24 90.03 ±.11 92.80 ±.19 89.09 ±.20 95.36 ±.07 .5k Tri (4) 88.42 ±.16 87.46 ±.20 87.97 ±.09 90.72 ±.14 93.40 ±.15 89.56 ±.16 95.94 ±.07 20.5k Tri-D (7) 88.50 ±.04 87.63 ±.15 88.12 ±.05 90.76 ±.10 93.51 ±.06 89.70 ±.08 95.99 ±.03 7.7K Asym (3) 87.81 ±.19 86.97 ±.17 87.74 ±.24 90.16 ±.17 92.73 ±.16 89.08 ±.19 95.55 ±.12 1.5k MT-Tri (4) 87.92 ±.18 87.20 ±.23 87.73 ±.37 90.27 ±.10 92.96 ±.07 89.21 ±.19 95.50 ±.06 7.6k FLORS 89.71 88.46 89.82 92.10 94.20 90.86 95.80 — Table 3: Accuracy scores on dev set of target domain for POS tagging for 10% labeled data. Avg: average over the 5 SANCL domains. Hyperparameter ep (epochs) is tuned on Answers dev. µpseudo: average amount of added pseudo-labeled data. FLORS: results for Batch (u:big) from (Yin et al., 2015) (see §3). Target domains dev sets Avg on Model Answers Emails Newsgroups Reviews Weblogs targets WSJ TnT* 88.55 88.14 88.66 90.40 93.33 89.82 95.75 Stanford* 88.92 88.68 89.11 91.43 94.15 90.46 96.83 Src 88.84 ±.15 88.24 ±.12 89.45 ±.23 91.24 ±.03 93.92 ±.17 90.34 ±.14 96.69 ±.08 Tri 89.34 ±.18 88.83 ±.07 89.32 ±.21 91.62 ±.06 94.40 ±.06 90.70 ±.12 96.84 ±.04 Tri-D 89.35 ±.16 88.66 ±.09 89.29 ±.12 91.58 ±.05 94.32 ±.05 90.62 ±.09 96.85 ±.06 Src (+glove) 89.35 ±.16 88.55 ±.14 90.12 ±.31 91.48 ±.15 94.48 ±.07 90.80 ±.17 96.90 ±.04 Tri 90.00 ±.03 89.06 ±.16 90.04 ±.25 91.98 ±.11 94.74 ±.06 91.16 ±.12 96.99 ±.02 Tri-D 89.80 ±.19 88.85 ±.10 90.03 ±.22 91.98 ±.09 94.70 ±.05 91.01 ±.13 96.95 ±.05 Asym 89.51 ±.15 88.47 ±.19 89.26 ±.16 91.60 ±.20 94.28 ±.15 90.62 ±.17 96.56 ±.01 MT-Tri 89.45 ±.05 88.65 ±.04 89.40 ±.22 91.63 ±.23 94.41 ±.05 90.71 ±.12 97.37 ±.07 FLORS* 90.30 89.44 90.86 92.95 94.71 91.66 96.59 Target domains test sets Avg on Model Answers Emails Newsgroups Reviews Weblogs targets WSJ TnT* 89.36 87.38 90.85 89.67 91.37 89.73 96.57 Stanford* 89.74 87.77 91.25 90.30 92.32 90.28 97.43 Src (+glove) 90.43 ±.13 87.95 ±.18 91.83 ±.20 90.04 ±.11 92.44 ±.14 90.54 ±.15 97.50 ±.03 Tri 91.21 ±.06 88.30 ±.19 92.18 ±.19 90.06 ±.10 92.85 ±.02 90.92 ±.11 97.45 ±.03 Asym 90.62 ±.26 87.71 ±.07 91.40 ±.05 89.89 ±.22 92.37 ±.27 90.39 ±.17 97.19 ±.03 MT-Tri 90.53 ±.15 87.90 ±.07 91.45 ±.19 89.77 ±.26 92.35 ±.09 90.40 ±.15 97.37 ±.07 FLORS* 91.17 88.67 92.41 92.25 93.14 91.53 97.11 Table 4: Accuracy for POS tagging on the dev and test sets of the SANCL domains, models trained on full source data setup. Values for methods with * are from (Schnabel and Schütze, 2014). in the results of Blitzer et al. (2007) and is corroborated in the results for asymmetric tri-training (Saito et al., 2017) and our method. We note a weakness of this dataset is high variance. Existing approaches only report the mean, which makes an objective comparison difficult. For this reason, we believe it is essential to evaluate proposed approaches also on other tasks. POS tagging Results for tagging in the low-data regime (10% of WSJ) are given in Table 3. Self-training does not work for the sequence prediction task. We report only the best instantiation (throttling with n=800). 
Our results contribute to negative findings regarding self-training (Plank, 2011; Van Asch and Daelemans, 2016). In the low-data setup, tri-training with disagreement works best, reaching an overall average accuracy of 89.70, closely followed by classic tritraining, and significantly outperforming the baseline on 4/5 domains. The exception is newsgroups, a difficult domain with high OOV rate where none of the approches beats the baseline (see §3.4). Our proposed MT-Tri is better than asymmetric tritraining, but falls below classic tri-training. It beats 1051 Ans Email Newsg Rev Webl % unk tag 0.25 0.80 0.31 0.06 0.0 % OOV 8.53 10.56 10.34 6.84 8.45 % UWT 2.91 3.47 2.43 2.21 1.46 Accuracy on OOV tokens Src 54.26 57.48 61.80 59.26 80.37 Tri 55.53 59.11 61.36 61.16 79.32 Asym 52.86 56.78 56.58 59.59 76.84 MT-Tri 52.88 57.22 57.28 58.99 77.77 Accuracy on unknown word-tag (UWT) tokens Src 17.68 11.14 17.88 17.31 24.79 Tri 16.88 10.04 17.58 16.35 23.65 Asym 17.16 10.43 17.84 16.92 22.74 MT-Tri 16.43 11.08 17.29 16.72 23.13 FLORS* 17.19 15.13 21.97 21.06 21.65 Table 5: Accuracy scores on dev sets for OOV and unknown word-tag (UWT) tokens. the baseline significantly on only 2/5 domains (answers and emails). The FLORS tagger (Yin et al., 2015) fares better. Its contextual distributional features are particularly helpful on unknown word-tag combinations (see § 3.4), which is a limitation of the lexicalized generic bi-LSTM tagger. For the high-data setup (Table 4) results are similar. Disagreement, however, is only favorable in the low-data setups; the effect of avoiding easy points no longer holds in the full data setup. Classic tritraining is the best method. In particular, traditional tri-training is complementary to word embedding initialization, pushing the non-pre-trained baseline to the level of SRC with Glove initalization. Tritraining pushes performance even further and results in the best model, significantly outperforming the baseline again in 4/5 cases, and reaching FLORS performance on weblogs. Multi-task tritraining is often slightly more effective than asymmetric tri-training (Saito et al., 2017); however, improvements for both are not robust across domains, sometimes performance even drops. The model likely is too simplistic for such a high-data POS setup, and exploring shared-private models might prove more fruitful (Liu et al., 2017). On the test sets, tri-training performs consistently the best. POS analysis We analyze POS tagging accuracy with respect to word frequency6 and unseen word-tag combinations (UWT) on the dev sets. Table 5 (top rows) provides percentage of un6The binned log frequency was calculated with base 2 (bin 0 are OOVs, bin 1 are singletons and rare words etc). Figure 3: POS accuracy per binned log frequency. known tags, OOVs and unknown word-tag (UWT) rate. The SANCL dataset is overall very challenging: OOV rates are high (6.8-11% compared to 2.3% in WSJ), so is the unknown word-tag (UWT) rate (answers and emails contain 2.91% and 3.47% UWT compared to 0.61% on WSJ) and almost all target domains even contain unknown tags (Schnabel and Schütze, 2014) (unknown tags: ADD,GW,NFP,XX), except for weblogs. Email is the domain with the highest OOV rate and highest unknown-tag-for-known-words rate. We plot accuracy with respect to word frequency on email in Figure 3, analyzing how the three methods fare in comparison to the baseline on this difficult domain. 
Regarding OOVs, the results in Table 5 (second part) show that classic tri-training outperforms the source model (trained on only source data) on 3/5 domains in terms of OOV accuracy, except on two domains with high OOV rate (newsgroups and weblogs). In general, we note that tri-training works best on OOVs and on low-frequency tokens, which is also shown in Figure 3 (leftmost bins). Both other methods fall typically below the baseline in terms of OOV accuracy, but MT-Tri still outperforms Asym in 4/5 cases. Table 5 (last part) also shows that no bootstrapping method works well on unknown word-tag combinations. UWT tokens are very difficult to predict correctly using an unsupervised approach; the less lexicalized and more context-driven approach taken by FLORS is clearly superior for these cases, resulting in higher UWT accuracies for 4/5 domains. 4 Related work Learning under Domain Shift There is a large body of work on domain adaptation. Studies on unsupervised domain adaptation include early work on bootstrapping (Steedman et al., 2003; McClosky et al., 2006a), shared feature representations (Blitzer et al., 2006, 2007) and instance weighting (Jiang and Zhai, 2007). Recent ap1052 proaches include adversarial learning (Ganin et al., 2016) and fine-tuning (Sennrich et al., 2016). There is almost no work on bootstrapping approaches for recent neural NLP, in particular under domain shift. Tri-training is less studied, and only recently re-emerged in the vision community (Saito et al., 2017), albeit is not compared to classic tri-training. Neural network ensembling Related work on self-ensembling approaches includes snapshot ensembling (Huang et al., 2017) or temporal ensembling (Laine and Aila, 2017). In general, the line between “explicit” and “implicit” ensembling (Huang et al., 2017), like dropout (Srivastava et al., 2014) or temporal ensembling (Saito et al., 2017), is more fuzzy. As we noted earlier our multi-task learning setup can be seen as a form of self-ensembling. Multi-task learning in NLP Neural networks are particularly well-suited for MTL allowing for parameter sharing (Caruana, 1993). Recent NLP conferences witnessed a “tsunami” of deep learning papers (Manning, 2015), followed by what we call a multi-task learning “wave”: MTL has been successfully applied to a wide range of NLP tasks (Cohn and Specia, 2013; Cheng et al., 2015; Luong et al., 2015; Plank et al., 2016; Fang and Cohn, 2016; Søgaard and Goldberg, 2016; Ruder et al., 2017; Augenstein et al., 2018). Related to it is the pioneering work on adversarial learning (DANN) (Ganin et al., 2016). For sentiment analysis we found tri-training and our MT-Tri model to outperform DANN. Our MT-Tri model lends itself well to shared-private models such as those proposed recently (Liu et al., 2017; Kim et al., 2017), which extend upon (Ganin et al., 2016) by having separate source and target-specific encoders. 5 Conclusions We re-evaluate a range of traditional generalpurpose bootstrapping algorithms in the context of neural network approaches to semi-supervised learning under domain shift. For the two examined NLP tasks classic tri-training works the best and even outperforms a recent state-of-the-art method. The drawback of tri-training it its time and space complexity. We therefore propose a more efficient multi-task tri-training model, which outperforms both traditional tri-training and recent alternatives in the case of sentiment analysis. 
For POS tagging, classic tri-training is superior, performing especially well on OOVs and low frequency tokens, which suggests it is less affected by error propagation. Overall we emphasize the importance of comparing neural approaches to strong baselines and reporting results across several runs. Acknowledgments We thank the anonymous reviewers for their valuable feedback. Sebastian is supported by Irish Research Council Grant Number EBPPG/2014/30 and Science Foundation Ireland Grant Number SFI/12/RC/2289. Barbara is supported by NVIDIA corporation and thanks the Computing Center of the University of Groningen for HPC support. References Steven Abney. 2007. Semisupervised learning for computational linguistics. CRC Press. Isabelle Augenstein, Sebastian Ruder, and Anders Søgaard. 2018. Multi-task Learning of Pairwise Sequence Classification Tasks Over Disparate Label Spaces. In Proceedings of NAACL-HLT 2018. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. Annual Meeting-Association for Computational Linguistics, 45(1):440. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain Adaptation with Structural Correspondence Learning. Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP ’06), pages 120–128. Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain Separation Networks. NIPS. Rich Caruana. 1993. Multitask learning: A knowledgebased source of inductive bias. In Proceedings of the Tenth International Conference on Machine Learning. Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien. 2006. Semi-Supervised Learning, volume 1. MIT press. Hao Cheng, Hao Fang, and Mari Ostendorf. 2015. Open-domain name error detection using a multitask rnn. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 737–746. Association for Computational Linguistics. Trevor Cohn and Lucia Specia. 2013. Modelling annotator bias with multi-task gaussian processes: An application to machine translation quality estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: 1053 Long Papers), pages 32–42, Sofia, Bulgaria. Association for Computational Linguistics. Michael Denkowski and Graham Neubig. 2017. Stronger baselines for trustable results in neural machine translation. arXiv preprint arXiv:1706.09733. Timothy Dozat and Christopher D. Manning. 2017. Deep Biaffine Attention for Neural Dependency Parsing. In Proceedings of ICLR 2017. Bradley Efron and Robert J Tibshirani. 1994. An introduction to the bootstrap. CRC press. Meng Fang and Trevor Cohn. 2016. Learning when to trust distant supervision: An application to lowresource pos tagging using cross-lingual projection. In Proceedings of CoNLL-16. Meng Fang and Trevor Cohn. 2017. Model transfer for tagging low-resource languages using a bilingual dictionary. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 587–593. Association for Computational Linguistics. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Francois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-Adversarial Training of Neural Networks. Journal of Machine Learning Research, 17:1–35. Rob van der Goot, Barbara Plank, and Malvina Nissim. 2017. 
To normalize, or not to normalize: The impact of normalization on part-of-speech tagging. In Proceedings of the 3rd Workshop on Noisy Usergenerated Text, pages 31–39, Copenhagen, Denmark. Association for Computational Linguistics. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On Calibration of Modern Neural Networks. Proceedings of ICML 2017. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what’s next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 473–483, Vancouver, Canada. Association for Computational Linguistics. Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In Proceedings of ACL 2018. Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, and Kilian Q. Weinberger. 2017. Snapshot Ensembles: Train 1, get M for free. In Proceedings of ICLR 2017. Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in nlp. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 264–271. Association for Computational Linguistics. Young-Bum Kim, Karl Stratos, and Dongchan Kim. 2017. Adversarial adaptation of synthetic or stale data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1297–1307, Vancouver, Canada. Association for Computational Linguistics. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional lstm feature representations. Transactions of the Association for Computational Linguistics, 4:313–327. Samuli Laine and Timo Aila. 2017. Temporal Ensembling for Semi-Supervised Learning. In Proceedings of ICLR 2017. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural Architectures for Named Entity Recognition. In NAACL-HLT 2016. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1–10, Vancouver, Canada. Association for Computational Linguistics. Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. 2015. The variational fair autoencoder. arXiv preprint arXiv:1511.00830. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114. Christopher D Manning. 2015. Computational linguistics and deep learning. Computational Linguistics, 41(4):701–707. David McClosky, Eugene Charniak, and Mark Johnson. 2006a. Effective self-training for parsing. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, New York City, USA. Association for Computational Linguistics. David McClosky, Eugene Charniak, and Mark Johnson. 2006b. Reranking and Self-Training for Parser Adaptation. International Conference on Computational Linguistics (COLING) and Annual Meeting of the Association for Computational Linguistics (ACL), (July):337–344. Gábor Melis, Chris Dyer, and Phil Blunsom. 2017. On the State of the Art of Evaluation in Neural Language Models. In arXiv preprint arXiv:1707.05589. 
Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, et al. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980. 1054 Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. Notes of the First Workshop on Syntactic Analysis of NonCanonical Language (SANCL), 59. Barbara Plank. 2011. Domain adaptation for parsing. University Library Groningen. Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Roi Reichart and Ari Rappoport. 2007. Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 616–623. Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of lstm-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338– 348, Copenhagen, Denmark. Association for Computational Linguistics. Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Søgaard. 2017. Learning what to share between loosely related tasks. arXiv preprint arXiv:1705.08142. Kuniaki Saito, Yoshitaka Ushiku, and Tatsuya Harada. 2017. Asymmetric Tri-training for Unsupervised Domain Adaptation. In ICML 2017. Tobias Schnabel and Hinrich Schütze. 2014. FLORS: Fast and Simple Domain Adaptation for Part-ofSpeech Tagging. TACL, 2:15–26. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics. Anders Søgaard. 2010. Simple semi-supervised training of part-of-speech taggers. In Proceedings of the ACL 2010 Conference Short Papers, pages 205–208. Anders Søgaard and Yoav Goldberg. 2016. Deep multitask learning with low level tasks supervised at lower layers. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 231–235, Berlin, Germany. Association for Computational Linguistics. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15:1929–1958. Mark Steedman, Rebecca Hwa, Stephen Clark, Miles Osborne, Anoop Sarkar, Julia Hockenmaier, Paul Ruhlen, Steven Baker, and Jeremiah Crim. 2003. Example selection for bootstrapping statistical parsers. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. Jun Suzuki and Hideki Isozaki. 2008. Semi-supervised sequential labeling and segmentation using gigaword scale unlabeled data. pages 665–673. Vincent Van Asch and Walter Daelemans. 2016. Predicting the effectiveness of self-training: Application to sentiment classification. 
arXiv preprint arXiv:1601.03288. Fangzhao Wu and Yongfeng Huang. 2016. Sentiment Domain Adaptation with Multiple Sources. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), pages 301–310. David Yarowsky. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics. Wenpeng Yin, Tobias Schnabel, and Hinrich Schütze. 2015. Online Updating of Word Representations for Part-of-Speech Tagging. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, September, pages 1329–1334. Guangyou Zhou, Zhiwen Xie, Jimmy Xiangji Huang, and Tingting He. 2016. Bi-transferring deep neural networks for domain adaptation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 322–332, Berlin, Germany. Association for Computational Linguistics. Zhi-Hua Zhou and Ming Li. 2005. Tri-Training: Exploiting Unlabeled Data Using Three Classifiers. IEEE Trans.Data Eng., 17(11):1529–1541. Xiaojin Zhu. 2005. Semi-Supervised Learning Literature Survey. Technical Report 1530, Computer Sciences, University of Wisconsin-Madison. Xiaojin Zhu and Andrew B Goldberg. 2009. Introduction to semi-supervised learning. Synthesis lectures on artificial intelligence and machine learning, 3(1):1–130.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1055–1065 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1055 Fluency Boost Learning and Inference for Neural Grammatical Error Correction Tao Ge Furu Wei Ming Zhou Microsoft Research Asia, Beijing, China {tage, fuwei, mingzhou}@microsoft.com Abstract Most of the neural sequence-to-sequence (seq2seq) models for grammatical error correction (GEC) have two limitations: (1) a seq2seq model may not be well generalized with only limited error-corrected data; (2) a seq2seq model may fail to completely correct a sentence with multiple errors through normal seq2seq inference. We attempt to address these limitations by proposing a fluency boost learning and inference mechanism. Fluency boosting learning generates fluency-boost sentence pairs during training, enabling the error correction model to learn how to improve a sentence’s fluency from more instances, while fluency boosting inference allows the model to correct a sentence incrementally through multi-round seq2seq inference until the sentence’s fluency stops increasing. Experiments show our approaches improve the performance of seq2seq models for GEC, achieving state-of-the-art results on both CoNLL2014 and JFLEG benchmark datasets. 1 Introduction Sequence-to-sequence (seq2seq) models (Cho et al., 2014; Sutskever et al., 2014) for grammatical error correction (GEC) have drawn growing attention (Yuan and Briscoe, 2016; Xie et al., 2016; Ji et al., 2017; Schmaltz et al., 2017; Sakaguchi et al., 2017; Chollampatt and Ng, 2018) in recent years. However, most of the seq2seq models for GEC have two flaws. First, the seq2seq models are trained with only limited error-corrected sentence pairs like Figure 1(a). Limited by the size of training data, the models with millions of parameters may not be well generalized. Thus, it is She see Tom is catched by policeman in park at last night. She saw Tom caught by a policeman in the park last night. She sees Tom is catched by policeman in park at last night. She sees Tom caught by a policeman in the park last night. She sees Tom caught by a policeman in the park last night. She saw Tom caught by a policeman in the park last night. (a) (b) (c) seq2seq inference seq2seq inference Figure 1: (a) an error-corrected sentence pair; (b) if the sentence becomes slightly different, the model fails to correct it perfectly; (c) single-round seq2seq inference cannot perfectly correct the sentence, but multi-round inference can. common that the models fail to correct a sentence perfectly even if the sentence is slightly different from the training instance, as illustrated by Figure 1(b). Second, the seq2seq models usually cannot perfectly correct a sentence with many grammatical errors through single-round seq2seq inference, as shown in Figure 1(b) and 1(c), because some errors in a sentence may make the context strange, which confuses the models to correct other errors. To address the above-mentioned limitations in model learning and inference, this paper proposes a novel fluency boost learning and inference mechanism, illustrated in Figure 2. 
For fluency boosting learning, not only is a seq2seq model trained with original errorcorrected sentence pairs, but also it generates less fluent sentences (e.g., from its n-best outputs) to establish new error-corrected sentence pairs by pairing them with their correct sentences during training, as long as the sentences’ fluency1 is be1A sentence’s fluency score is defined to be inversely proportional to the sentence’s cross entropy, as is in Eq (3). 1056 She see Tom is catched by policeman in park at last night. She saw Tom caught by a policeman in the park last night. She see Tom is caught by a policeman in park last night. She sees Tom caught by a policeman in the park last night. She saw Tom caught by a policeman in the park last night. She saw Tom was caught by a policeman in the park last night. She sees Tom is catched by policeman in park at last night. …… 0.119 0.147 0.144 0.135 0.181 0.121 0.147 n-best outputs original sentence pair fluency boost sentence pair She sees Tom is catched by policeman in park at last night. She sees Tom caught by a policeman in the park last night. She saw Tom caught by a policeman in the park last night. She saw Tom caught by a policeman in the park last night. 1st round seq2seq inference 2nd round seq2seq inference 3rd round seq2seq inference 0.121 0.144 0.147 0.147 boost no boost (a) (b) boost sentence fluency fluency sentence seq2seq inference Figure 2: Fluency boost learning and inference: (a) given a training instance (i.e., an error-corrected sentence pair), fluency boost learning establishes multiple fluency boost sentence pairs from the seq2seq’s n-best outputs during training. The fluency boost sentence pairs will be used as training instances in subsequent training epochs, which helps expand the training set and accordingly benefits model learning; (b) fluency boost inference allows an error correction model to correct a sentence incrementally through multi-round seq2seq inference until its fluency score stops increasing. low that of their correct sentences, as Figure 2(a) shows. Specifically, we call the generated errorcorrected sentence pairs fluency boost sentence pairs because the sentence in the target side always improves fluency over that in the source side. The generated fluency boost sentence pairs during training will be used as additional training instances during subsequent training epochs, allowing the error correction model to see more grammatically incorrect sentences during training and accordingly improving its generalization ability. For model inference, fluency boost inference mechanism allows the model to correct a sentence incrementally with multi-round inference as long as the proposed edits can boost the sentence’s fluency, as Figure 2(b) shows. For a sentence with multiple grammatical errors, some of the errors will be corrected first. The corrected parts will make the context clearer, which may benefit the model to correct the remaining errors. Experiments demonstrate fluency boost learning and inference enable neural seq2seq models to perform better for GEC and achieve state-of-theart results on multiple GEC benchmarks. Our contributions are summarized as follows: • We present a novel learning and inference mechanism to address the limitations in previous seq2seq models for GEC. • We propose and compare multiple novel fluency boost learning strategies, exploring the learning methodology for neural GEC. 
• Our approaches are proven to be effective to improve neural seq2seq GEC models to achieve state-of-the-art results on CoNLL2014 and JFLEG benchmark datasets. 2 Background: Neural grammatical error correction As neural machine translation (NMT), a typical neural GEC approach uses a Recurrent Neural Network (RNN) based encoder-decoder seq2seq model (Sutskever et al., 2014; Cho et al., 2014) with attention mechanism (Bahdanau et al., 2014) to edit a raw sentence into the grammatically correct sentence it should be, as Figure 1(a) shows. Given a raw sentence xr = (xr 1, · · · , xr M) and its corrected sentence xc = (xc 1, · · · , xc N) in which xr M and xc N are the M-th and N-th words of sentence xr and xc respectively, the error correction seq2seq model learns a probabilistic mapping P(xc|xr) from error-corrected sentence pairs through maximum likelihood estimation (MLE), which learns model parameters Θcrt to maximize the following equation: Θ∗ crt = arg max Θcrt X (xr,xc)∈S∗ log P(xc|xr; Θcrt) (1) where S∗denotes the set of error-corrected sentence pairs. For model inference, an output sequence xo = (xo 1, · · · , xo i , · · · , xo L) is selected through beam search, which maximizes the following equation: P(xo|xr) = L Y i=1 P(xo i |xr, xo <i; Θcrt) (2) 1057 xr seq2seq error correction xo1 xo2 xo3 xo4 xc 0.142 0.150 0.150 0.152 0.143 0.140 xc seq2seq error generation xo1 xo2 xo3 xo4 xc 0.150 0.150 0.141 0.151 0.153 0.144 (xo1, xc) (xo4,xc) seq2seq error correction (xo3, xc) (xo4,xc) xr seq2seq error correction xo5 xo6 xo7 xo8 xc 0.142 0.150 0.150 0.152 0.143 0.140 xc seq2seq error generation xo1 xo2 xo3 xo4 xc 0.150 0.150 0.141 0.151 0.153 0.144 (xo1, xc) (xo4,xc) (xo7, xc) (xo8,xc) (a) (b) (c) Figure 3: Three fluency boost learning strategies: (a) back-boost, (b) self-boost, (c) dual-boost; all of them generate fluency boost sentence pairs (the pairs in the dashed boxes) to help model learning during training. The numbers in this figure are fluency scores of their corresponding sentences. 3 Fluency boost learning Conventional seq2seq models for GEC learns model parameters only from original errorcorrected sentence pairs. However, such errorcorrected sentence pairs are not sufficiently available. As a result, many neural GEC models are not very well generalized. Fortunately, neural GEC is different from NMT. For neural GEC, its goal is improving a sentence’s fluency2 without changing its original meaning; thus, any sentence pair that satisfies this condition (we call it fluency boost condition) can be used as a training instance. In this paper, we define f(x) as the fluency score of a sentence x: f(x) = 1 1 + H(x) (3) H(x) = − P|x| i=1 log P(xi|x<i) |x| (4) where P(xi|x<i) is the probability of xi given context x<i, computed by a language model, and |x| is the length of sentence x. H(x) is actually the cross entropy of the sentence x, whose range is [0, +∞). Accordingly, the range of f(x) is (0, 1]. The core idea of fluency boost learning is to generate fluency boost sentence pairs that satisfy the fluency boost condition during training, as Figure 2(a) illustrates, so that these pairs can further help model learning. In this section, we present three fluency boost learning strategies: back-boost, self-boost, and 2Fluency of a sentence in this paper refers to how likely the sentence is written by a native speaker. In other words, if a sentence is very likely to be written by a native speaker, it should be regarded highly fluent. 
dual-boost that generate fluency boost sentence pairs in different ways, as illustrated in Figure 3. 3.1 Back-boost learning Back-boost learning borrows the idea from back translation (Sennrich et al., 2016) in NMT, referring to training a backward model (we call it error generation model, as opposed to error correction model) that is used to convert a fluent sentence to a less fluent sentence with errors. Since the less fluent sentences are generated by the error generation seq2seq model trained with error-corrected data, they usually do not change the original sentence’s meaning; thus, they can be paired with their correct sentences, establishing fluency boost sentence pairs that can be used as training instances for error correction models, as Figure 3(a) shows. Specifically, we first train a seq2seq error generation model Θgen with f S∗which is identical to S∗ except that the source sentence and the target sentence are interchanged. Then, we use the model Θgen to predict n-best outputs xo1, · · · , xon given a correct sentence xc. Given the fluency boost condition, we compare the fluency of each output xok (where 1 ≤k ≤n) to that of its correct sentence xc. If an output sentence’s fluency score is much lower than its correct sentence, we call it a disfluency candidate of xc. To formalize this process, we first define Yn(x; Θ) to denote the n-best outputs predicted by model Θ given the input x. Then, disfluency candidates of a correct sentence xc can be derived: Dback(xc) = {xok|xok ∈Yn(xc; Θgen) ∧f(xc) f(xok) ≥σ} (5) 1058 Algorithm 1 Back-boost learning 1: Train error generation model Θgen with f S∗; 2: for each sentence pair (xr, xc) ∈S do 3: Compute Dback(xc) according to Eq (5); 4: end for 5: for each training epoch t do 6: S′ ←∅; 7: Derive a subset St by randomly sampling |S∗| elements from S; 8: for each (xr, xc) ∈St do 9: Establish a fluency boost pair (x′, xc) by randomly sampling x′ ∈Dback(xc); 10: S′ ←S′ ∪{(x′, xc)}; 11: end for 12: Update error correction model Θcrt with S∗∪S′; 13: end for where Dback(xc) denotes the disfluency candidate set for xc in back-boost learning. σ is a threshold to determine if xok is less fluent than xc and it should be slightly larger3 than 1.0, which helps filter out sentence pairs with unnecessary edits (e.g., I like this book. →I like the book.). In the subsequent training epochs, the error correction model will not only learn from the original error-corrected sentence pairs (xr,xc), but also learn from fluency boost sentence pairs (xok,xc) where xok is a sample of Dback(xc). We summarize this process in Algorithm 1 where S∗is the set of original error-corrected sentence pairs, and S can be tentatively considered identical to S∗when there is no additional native data to help model training (see Section 3.4). Note that we constrain the size of St not to exceed |S∗| (the 7th line in Algorithm 1) to avoid that too many fluency boost pairs overwhelm the effects of the original error-corrected pairs on model learning. 3.2 Self-boost learning In contrast to back-boost learning whose core idea is originally from NMT, self-boost learning is original, which is specially devised for neural GEC. The idea of self-boost learning is illustrated by Figure 3(b) and was already briefly introduced in Section 1 and Figure 2(a). Unlike back-boost learning in which an error generation seq2seq model is trained to generate disfluency candidates, self-boost learning allows the error correction model to generate the candidates by itself. 
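Both strategies rely on the same two pieces of machinery: the fluency score of Eq (3)-(4) and the sigma-based candidate filter of Eq (5) (and its self-boost variant in Eq (6) below). For concreteness, a minimal sketch of these two pieces is given here; it is an illustrative sketch rather than the implementation used in the experiments, and the language-model interface token_logprobs is an assumed placeholder.

from typing import Callable, List

# Assumed interface (not specified in the paper): a language model that returns the
# log-probability log P(x_i | x_<i) for every token of a tokenized sentence.
TokenLogProbFn = Callable[[List[str]], List[float]]

def fluency(tokens: List[str], token_logprobs: TokenLogProbFn) -> float:
    # Eq (3)-(4): f(x) = 1 / (1 + H(x)), with H(x) the per-token cross entropy
    # of the sentence under the language model.
    logps = token_logprobs(tokens)
    h = -sum(logps) / max(len(tokens), 1)   # cross entropy H(x), range [0, +inf)
    return 1.0 / (1.0 + h)                  # fluency f(x), range (0, 1]

def disfluency_candidates(correct: List[str],
                          nbest: List[List[str]],
                          token_logprobs: TokenLogProbFn,
                          sigma: float = 1.05) -> List[List[str]]:
    # Eq (5)/(6): keep an n-best output only if it is clearly less fluent than the
    # correct sentence, i.e. f(x_c) / f(x_ok) >= sigma (sigma = 1.05 in this paper).
    f_c = fluency(correct, token_logprobs)
    return [cand for cand in nbest
            if cand != correct and f_c / fluency(cand, token_logprobs) >= sigma]

In this sketch the filter simply discards outputs identical to the correct sentence and those whose fluency is not sufficiently below it, which is exactly the role the threshold sigma plays in Eq (5).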
Since the disfluency candidates generated by the error correction seq2seq model trained with error-corrected data rarely change the input 3In this paper, we set σ = 1.05 since the corrected sentence in our training data improves its corresponding raw sentence about 5% fluency on average. Algorithm 2 Self-boost learning 1: for each sentence pair (xr, xc) ∈S do 2: Dself(xc) ←∅; 3: end for 4: S′ ←∅ 5: for each training epoch t do 6: Update error correction model Θcrt with S∗∪S′; 7: S′ ←∅ 8: Derive a subset St by randomly sampling |S∗| elements from S; 9: for each (xr, xc) ∈St do 10: Update Dself(xc) according to Eq (6); 11: Establish a fluency boost pair (x′, xc) by randomly sampling x′ ∈Dself(xc); 12: S′ ←S′ ∪{(x′, xc)}; 13: end for 14: end for sentence’s meaning; thus, they can be used to establish fluency boost sentence pairs. For self-boost learning, given an error corrected pair (xr, xc), an error correction model Θcrt first predicts n-best outputs xo1, · · · , xon for the raw sentence xr. Among the n-best outputs, any output that is not identical to xc can be considered as an error prediction. Instead of treating the error predictions useless, self-boost learning fully exploits them. Specifically, if an error prediction xok is much less fluent than that of its correct sentence xc, it will be added to xc’s disfluency candidate set Dself(xc), as Eq (6) shows: Dself(xc) = Dself(xc) ∪ {xok|xok ∈Yn(xr; Θcrt) ∧f(xc) f(xok) ≥σ} (6) In contrast to back-boost learning, self-boost generates disfluency candidates from a different perspective – by editing the raw sentence xr rather than the correct sentence xc. It is also noteworthy that Dself(xc) is incrementally expanded because the error correction model Θcrt is dynamically updated, as shown in Algorithm 2. 3.3 Dual-boost learning As introduced above, back- and self-boost learning generate disfluency candidates from different perspectives to create more fluency boost sentence pairs to benefit training the error correction model. Intuitively, the more diverse disfluency candidates generated, the more helpful for training an error correction model. Inspired by He et al. (2016) and Zhang et al. (2018), we propose a dual-boost learning strategy, combining both back- and selfboost’s perspectives to generate disfluency candidates. 
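All three strategies instantiate the same per-epoch loop: sample a subset S_t of at most |S*| sentences, pair each correct sentence with one randomly drawn disfluency candidate, and update the error correction model on the union of original and generated pairs. The sketch below shows this shared skeleton under simplifying assumptions: it abstracts away when and how each strategy refreshes its candidate sets, and train_one_epoch is a placeholder for ordinary seq2seq training rather than an actual API; the dual-boost variant in Algorithm 3 below additionally updates the error generation model with reversed pairs.

import random

def fluency_boost_training(S_star, epochs, error_correction_model,
                           disfluency_candidates_of):
    # Shared skeleton of Algorithms 1-3 (a sketch, not the implementation used here).
    # S_star: original error-corrected pairs (x_r, x_c).
    # disfluency_candidates_of(x_c): the current candidate set D(x_c), filled by the
    #   error generation model (back-boost), the correction model's own n-best
    #   outputs (self-boost), or both (dual-boost).
    S = list(S_star)  # identical to S* when no extra native data is used (Section 3.4)
    for _ in range(epochs):
        boost_pairs = []
        # |S_t| is capped at |S*| so that boost pairs do not overwhelm the originals
        S_t = random.sample(S, min(len(S_star), len(S)))
        for x_r, x_c in S_t:
            candidates = list(disfluency_candidates_of(x_c))
            if candidates:
                x_prime = random.choice(candidates)
                boost_pairs.append((x_prime, x_c))   # fluency boost pair
        # one epoch of ordinary seq2seq training on original plus boost pairs
        error_correction_model.train_one_epoch(S_star + boost_pairs)
    return error_correction_model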
1059 Algorithm 3 Dual-boost learning 1: for each (xr, xc) ∈S do 2: Ddual(xc) ←∅; 3: end for 4: S′ ←∅; S′′ ←∅; 5: for each training epoch t do 6: Update error correction model Θcrt with S∗∪S′; 7: Update error generation model Θgen with f S∗∪S′′; 8: S′ ←∅; S′′ ←∅; 9: Derive a subset St by randomly sampling |S∗| elements from S; 10: for each (xr, xc) ∈St do 11: Update Ddual(xc) according to Eq (7); 12: Establish a fluency boost pair (x′, xc) by randomly sampling x′ ∈Ddual(xc); 13: S′ ←S′ ∪{(x′, xc)}; 14: Establish a reversed fluency boost pair (xc, x′′) by randomly sampling x′′ ∈Ddual(xc); 15: S′′ ←S′′ ∪{(xc, x′′)}; 16: end for 17: end for As Figure 3(c) shows, disfluency candidates in dual-boost learning are from both the error generation model and the error correction model : Ddual(xc) = Ddual(xc) ∪ {xok|xok ∈Yn(xr; Θcrt) ∪Yn(xc; Θgen) ∧f(xc) f(xok) ≥σ} (7) Moreover, the error correction model and the error generation model are dual and both of them are dynamically updated, which improves each other: the disfluency candidates produced by error generation model can benefit training the error correction model, while the disfluency candidates created by error correction model can be used as training data for the error generation model. We summarize this learning approach in Algorithm 3. 3.4 Fluency boost learning with large-scale native data Our proposed fluency boost learning strategies can be easily extended to utilize the huge volume of native data which is proven to be useful for GEC. As discussed in Section 3.1, when there is no additional native data, S in Algorithm 1–3 is identical to S∗. In the case where additional native data is available to help model learning, S becomes: S = S∗∪C where C = {(xc, xc)} denotes the set of selfcopied sentence pairs from native data. 4 Fluency boost inference As we discuss in Section 1, some sentences with multiple grammatical errors usually cannot be perfectly corrected through normal seq2seq inference Corpus #sent pair Lang-8 1,114,139 CLC 1,366,075 NUCLE 57,119 Total 2,537,333 Table 1: Error-corrected training data. which does only single-round inference. Fortunately, neural GEC is different from NMT: its source and target language are the same. The characteristic allows us to edit a sentence more than once through multi-round model inference, which motivates our fluency boost inference. As Figure 2(b) shows, fluency boost inference allows a sentence to be incrementally edited through multiround seq2seq inference as long as the sentence’s fluency can be improved. Specifically, an error correction seq2seq model first takes a raw sentence xr as an input and outputs a hypothesis xo1. Instead of regarding xo1 as the final prediction, fluency boost inference will then take xo1 as the input to generate the next output xo2. The process will not terminate unless xot does not improve xot−1 in terms of fluency. 5 Experiments 5.1 Dataset and evaluation As previous studies (Ji et al., 2017), we use the public Lang-8 Corpus (Mizumoto et al., 2011; Tajiri et al., 2012), Cambridge Learner Corpus (CLC) (Nicholls, 2003) and NUS Corpus of Learner English (NUCLE) (Dahlmeier et al., 2013) as our original error-corrected training data. Table 1 shows the stats of the datasets. In addition, we also collect 2,865,639 non-public errorcorrected sentence pairs from Lang-8.com. The native data we use for fluency boost learning is English Wikipedia that contains 61,677,453 sentences. 
We use CoNLL-2014 shared task dataset with original annotations (Ng et al., 2014), which contains 1,312 sentences, as our main test set for evaluation. We use MaxMatch (M2) precision, recall and F0.5 (Dahlmeier and Ng, 2012b) as our evaluation metrics. As previous studies, we use CoNLL2013 test data as our development set. 5.2 Experimental setting We set up experiments in order to answer the following questions: 1060 Model seq2seq fluency boost seq2seq (+LM) fluency boost (+LM) P R F0.5 P R F0.5 P R F0.5 P R F0.5 normal seq2seq 61.06 18.49 41.81 61.56 18.85 42.37 61.75 23.30 46.42 61.94 23.70 46.83 back-boost 61.66 19.54 43.09 61.43 19.61 43.07 61.47 24.74 47.40 61.24 25.01 47.48 self-boost 61.64 19.83 43.35 61.50 19.90 43.36 62.13 24.45 47.49 61.67 24.76 47.51 dual-boost 62.03 20.82 44.44 61.64 21.19 44.61 62.22 25.49 48.30 61.64 26.45 48.69 back-boost (+native) 63.93 22.03 46.31 63.95 22.12 46.40 62.04 27.43 49.54 61.98 27.70 49.68 self-boost (+native) 64.33 22.10 46.54 64.14 22.19 46.54 62.18 27.59 49.71 61.64 28.37 49.93 dual-boost (+native) 65.77 21.92 46.98 65.82 22.14 47.19 62.64 27.40 49.83 62.70 27.69 50.04 back-boost (+native)⋆ 67.37 24.31 49.75 67.25 24.35 49.73 64.61 28.44 51.51 64.46 28.78 51.66 self-boost (+native)⋆ 66.52 25.13 50.03 66.78 25.33 50.31 63.82 30.15 52.17 63.34 31.63 52.21 dual-boost (+native)⋆ 66.34 25.39 50.16 66.45 25.51 50.30 64.72 30.06 52.59 64.47 30.48 52.72 Table 2: Performance of seq2seq for GEC with different learning (row) and inference (column) methods on CoNLL-2014 dataset. (+LM) denotes decoding with the RNN language model through shallow fusion. The last 3 systems (with ⋆) use the additional non-public Lang-8 data for training. • Whether is fluency boost learning mechanism helpful for training the error correction model, and which of the strategies (back-boost, selfboost, dual-boost) is the most effective? • Whether does our fluency boost inference improve normal seq2seq inference for GEC? • Whether can our approach improve neural GEC to achieve state-of-the-art results? The training details for our seq2seq error correction model and error generation model are as follows: the encoder of the seq2seq models is a 2-layer bidirectional GRU RNN and the decoder is a 2-layer GRU RNN with the general attention mechanism (Luong et al., 2015). Both the dimensionality of word embeddings and the hidden size of GRU cells are 500. The vocabulary sizes of the encoder and decoder are 100,000 and 50,000 respectively. The models’ parameters are uniformly initialized in [-0.1,0.1]. We train the models with an Adam optimizer with a learning rate of 0.0001 up to 40 epochs with batch size = 128. Dropout is applied to non-recurrent connections at a ratio of 0.15. For fluency boost learning, we generate disfluency candidates from 10-best outputs. During model inference, we set beam size to 5 and decode 1-best result with a 2-layer GRU RNN language model (Mikolov et al., 2010) through shallow fusion (G¨ulc¸ehre et al., 2015) with weight β = 0.15. The RNN language model is trained from the native data mentioned in Section 5.1, which is also used for computing fluency score in Eq (3). UNK tokens are replaced with the source token with the highest attention weight. We resolve spelling errors with a public spell checker4 as preprocessing, as Xie et al. (2016) and Sakaguchi et al. (2017) do. 
4https://azure.microsoft.com/en-us/services/cognitiveservices/spell-check/ 5.3 Experimental results 5.3.1 Effectiveness of fluency boost learning Table 2 compares the performance of seq2seq error correction models with different learning and inference methods. By comparing by row, one can observe that our fluency boost learning approaches improve the performance over normal seq2seq learning, especially on the recall metric, since the fluency boost learning approaches generate a variety of grammatically incorrect sentences, allowing the error correction model to learn to correct much more sentences than the conventional learning strategy. Among the proposed three fluency boost learning strategies, dual-boost achieves the best result in most cases because it produces more diverse incorrect sentences (average |Ddual| ≈ 9.43) than either back-boost (avg |Dback| ≈1.90) or self-boost learning (avg |Dself| ≈8.10). With introducing large amounts of native text data, the performance of all the fluency boost learning approaches gets improved. One reason is that our learning approaches produce more error-corrected sentence pairs to let the model be better generalized. In addition, the huge volume of native data benefits the decoder to learn better to generate a fluent and error-free sentence. We test the effect of hyper-parameter σ in Eq (5–7) on fluency boost learning and show the result in Table 3. When σ is slightly larger than 1.0 (e.g., σ = 1.05), the model achieves the best performance because it effectively avoids generating sentence pairs with unnecessary or undesirable edits that affect the performance, as we discussed in Section 3.1. When σ continues increasing, the disfluency candidate set |Ddual| drastically decreases, making the dual-boost learning gradually degrade to normal seq2seq learning. Table 4 shows some examples of disfluency 1061 σ 0 0.95 1.0 1.05 1.1 2.0 |Ddual| 41.18 39.21 29.40 9.43 3.87 0.01 F0.5 43.20 43.30 43.39 44.44 43.30 41.78 Table 3: The effect of σ on dual-boost learning with normal seq2seq inference. |Ddual| is the average size of dual-boost disfluency candidate sets. Correct sentence How autism occurs is not well understood. Disfluency candidates How autism occurs is not good understood. How autism occur is not well understood. What autism occurs is not well understood. How autism occurs is not well understand. How autism occurs does not well understood. Table 4: Examples of disfluency candidates for a correct sentence in dual-boost learning. candidates5 generated in dual-boost learning given a correct sentence in the native data. It is clear that our approach can generate less fluent sentences with various grammatical errors and most of them are typical mistakes that a human learner tends to make. Therefore, they can be used to establish high-quality training data with their correct sentence, which will be helpful for increasing the size of training data to numbers of times, accounting for the improvement by fluency boost learning. 5.3.2 Effectiveness of fluency boost inference The effectiveness of various inference approaches can be observed by comparing the results in Table 2 by column. Compared to the normal seq2seq inference and seq2seq (+LM) baselines, fluency boost inference brings about on average 0.14 and 0.18 gain on F0.5 respectively, which is a significant6 improvement, demonstrating multi-round edits by fluency boost inference is effective. 
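For concreteness, the multi-round procedure evaluated above (Section 4) can be sketched as a simple loop that re-decodes its own output until the fluency score of Eq (3) stops increasing. The helper names correct_once and fluency below are placeholders for one round of beam-search decoding (optionally with the shallow-fusion language model) and the fluency scorer; they are not actual APIs, and the max_rounds safety cap is an addition of this sketch rather than part of the method.

def fluency_boost_inference(x_raw, correct_once, fluency, max_rounds=10):
    # Multi-round inference (Section 4), sketched under the assumption that
    # correct_once(x) returns the 1-best seq2seq correction of sentence x and
    # fluency(x) implements Eq (3). The loop stops as soon as the proposed
    # output no longer improves fluency, keeping the previous output.
    current = x_raw
    current_fluency = fluency(current)
    for _ in range(max_rounds):
        proposed = correct_once(current)
        proposed_fluency = fluency(proposed)
        if proposed_fluency <= current_fluency:   # no fluency boost: stop
            break
        current, current_fluency = proposed, proposed_fluency
    return current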
Take our best system (the last row in Table 2) as an example, among 1,312 sentences in the CoNLL-2014 dataset, seq2seq inference with shallow fusion LM edits 566 sentences. In contrast, fluency boost inference additionally edits 23 sentences during the second round inference, improving F0.5 from 52.59 to 52.72. 5.3.3 Towards the state-of-the-art for GEC Now, we answer the last question raised in Section 5.2 by testing if our approaches achieve the stateof-the-art result. We first compare our best models – dual-boost learning (+native) with fluency boost inference and shallow fusion LM – to top-performing GEC systems evaluated on CoNLL-2014 dataset: 5We give more details about disfluency candidates, including error type proportion, in the supplementary notes. 6p < 0.0005 according to Wilcoxon Signed-Rank Test. System P R F0.5 Spell check 53.01 8.16 25.25 CAMB14 39.71 30.10 37.33 CAMB16SMT 45.39 21.82 37.33 CAMB16NMT 39.90 CAMB17 (CAMB16SMT based) 51.09 25.30 42.44 CAMB17 (AMU16 based) 59.88 32.16 51.08 AMU14 41.62 21.40 35.01 AMU16 61.27 27.98 49.49 AMU16⋆ 63.52 30.49 52.21 CUUI 41.78 24.88 36.79 VT16⋆ 60.17 25.64 47.40 NUS14 53.55 19.14 39.39 NUS16 44.27 NUS17 62.74 32.96 53.14 Char-seq2seq 49.24 23.77 40.56 Nested-seq2seq 45.15 Adapt-seq2seq 41.37 dual-boost (single) 62.70 27.69 50.04 dual-boost (AMU16 based) 60.57 36.02 53.30 dual-boost (single)⋆ 64.47 30.48 52.72 dual-boost (AMU16 based)⋆ 61.24 37.86 54.51 Table 5: Performance of systems on CoNLL-2014 dataset. The system with bold fonts are based on seq2seq models. ⋆denotes the system uses the non-public error-corrected data from Lang-8.com. • CAMB14, CAMB16SMT, CAMB16NMT and CAMB17: GEC systems (Felice et al., 2014; Yuan et al., 2016; Yuan and Briscoe, 2016; Yannakoudakis et al., 2017) developed by Cambridge University. • AMU14 and AMU16: SMT-based GEC systems (Junczys-Dowmunt and Grundkiewicz, 2014, 2016) developed by AMU. • CUUI and VT16: the former system (Rozovskaya et al., 2014) uses a classifier-based approach, which is improved by the latter system (Rozovskaya and Roth, 2016) through combining with an SMT-based approach. • NUS14, NUS16 and NUS17: GEC systems (Susanto et al., 2014; Chollampatt et al., 2016a; Chollampatt and Ng, 2017) that combine SMT with other techniques (e.g., classifiers). • Char-seq2seq: a character-level seq2seq model (Xie et al., 2016). It uses a rule-based method to synthesize errors for data augmentation. • Nested-seq2seq: a nested attention neural hybrid seq2seq model (Ji et al., 2017). • Adapt-seq2seq: a seq2seq model adapted to incorporate edit operations (Schmaltz et al., 2017). Table 5 shows the evaluation results on the CoNLL-2014 dataset. Without using the nonpublic training data from Lang-8.com, our sin1062 gle model obtains 50.04 F0.5, larlgely outperforming the other seq2seq models and only inferior to CAMB17 (AMU16 based) and NUS17. It should be noted, however, that the CAMB17 and NUS17 are actually re-rankers built on top of an SMTbased GEC system (AMU16’s framework); thus, they are ensemble models. When we build our approach on top of AMU16 (i.e., we take AMU16’s outputs as the input to our GEC system to edit on top of its outputs), we achieve 53.30 F0.5 score. With introducing the non-public training data, our single and ensemble system obtain 52.72 and 54.51 F0.5 score respectively, which is a stateof-the-art result7 on CoNLL-2014 dataset. Moreover, we evaluate our approach on JFLEG corpus (Napoles et al., 2017). 
JFLEG is the latest released dataset for GEC evaluation and it contains 1,501 sentences (754 in dev set and 747 in test set). To test our approach’s generalization ability, we evaluate our single models used for CoNLL evaluation (in Table 5) on JFLEG without re-tuning. Table 6 shows the JFLEG leaderboard. Instead of M2 score, JFLEG uses GLEU (Napoles et al., 2015) as its evaluation metric, which is a fluencyoriented GEC metric based on a variant of BLEU (Papineni et al., 2002) and has several advantages over M2 for GEC evaluation. It is observed that our single models consistently perform well on JFLEG, outperforming most of the CoNLL-2014 top-performing systems and yielding a state-ofthe-art result8 on this benchmark, demonstrating that our models are well generalized and perform stably on multiple datasets. 6 Related work Most of advanced GEC systems are classifierbased (Chodorow et al., 2007; De Felice and Pulman, 2008; Han et al., 2010; Leacock et al., 2010; Tetreault et al., 2010a; Dale and Kilgarriff, 2011) 7The state-of-the-art result on CoNLL-2014 dataset has been recently advanced by Chollampatt and Ng (2018) (F0.5=54.79) and Grundkiewicz and Junczys-Dowmunt (2018) (F0.5=56.25), which are contemporaneous to this paper. In contrast to the basic seq2seq model in this paper, they used advanced approaches for modeling (e.g., convolutional seq2seq with pre-trained word embedding, using edit operation features, ensemble decoding and advanced model combinations). It should be noted that their approaches are orthogonal to ours, making it possible to apply our fluency boost learning and inference mechanism to their models. 8The recently proposed SMT-NMT hybrid system (Grundkiewicz and Junczys-Dowmunt, 2018), which is tuned towards GLEU on JFLEG Dev set, reports a higher result (GLEU=61.50 on JFLEG test set). System JFLEG Dev JFLEG Test GLEU GLEU Source 38.21 40.54 CAMB14 42.81 46.04 CAMB16SMT 46.10 CAMB16NMT 47.20 52.05 CAMB17 (CAMB16SMT based) 47.72 CAMB17 (AMU16 based) 43.26 NUS16 46.27 50.13 NUS17 51.01 56.78 AMU16∗ 49.74 51.46 Nested-seq2seq 48.93 53.41 Sakaguchi et al. (2017)∗ 49.82 53.98 Ours 51.35 56.33 Ours (with non-public Lang-8 data) 52.93 57.74 Human 55.26 62.37 Table 6: JFLEG Leaderboard. Ours denote the single dual-boost models in Table 5. The systems with bold fonts are based on seq2seq models. ∗ denotes the system is tuned on JFLEG. or MT-based (Brockett et al., 2006; Dahlmeier and Ng, 2011, 2012a; Yoshimoto et al., 2013; Yuan and Felice, 2013; Behera and Bhattacharyya, 2013). For example, top-performing systems (Felice et al., 2014; Rozovskaya et al., 2014; JunczysDowmunt and Grundkiewicz, 2014) in CoNLL2014 shared task (Ng et al., 2014) use either of the methods. Recently, many novel approaches (Susanto et al., 2014; Chollampatt et al., 2016b,a; Rozovskaya and Roth, 2016; Junczys-Dowmunt and Grundkiewicz, 2016; Mizumoto and Matsumoto, 2016; Yuan et al., 2016; Hoang et al., 2016; Yannakoudakis et al., 2017) have been proposed for GEC. Among them, seq2seq models (Yuan and Briscoe, 2016; Xie et al., 2016; Ji et al., 2017; Sakaguchi et al., 2017; Schmaltz et al., 2017; Chollampatt and Ng, 2018) have caught much attention. 
Unlike the models trained only with original error-corrected data, we propose a novel fluency boost learning mechanism for dynamic data augmentation along with training for GEC, despite some previous studies that explore artificial error generation for GEC (Brockett et al., 2006; Foster and Andersen, 2009; Rozovskaya and Roth, 2010, 2011; Rozovskaya et al., 2012; Felice and Yuan, 2014; Xie et al., 2016; Rei et al., 2017). Moreover, we propose fluency boost inference which allows the model to repeatedly edit a sentence as long as the sentence’s fluency can be improved. To the best of our knowledge, it is the first to conduct multi-round seq2seq inference for GEC, while similar ideas have been proposed for NMT (Xia et al., 2017). In addition to the studies on GEC, there is also much research on grammatical error detection 1063 (Leacock et al., 2010; Rei and Yannakoudakis, 2016; Kaneko et al., 2017) and GEC evaluation (Tetreault et al., 2010b; Madnani et al., 2011; Dahlmeier and Ng, 2012c; Napoles et al., 2015; Sakaguchi et al., 2016; Napoles et al., 2016; Bryant et al., 2017; Asano et al., 2017). We do not introduce them in detail because they are not much related to this paper’s contributions. 7 Conclusion We propose a novel fluency boost learning and inference mechanism to overcome the limitations of previous neural GEC models. Our proposed fluency boost learning fully exploits both errorcorrected data and native data, largely improving the performance over normal seq2seq learning, while fluency boost inference utilizes the characteristic of GEC to incrementally improve a sentence’s fluency through multi-round inference. The powerful learning and inference mechanism enables the seq2seq models to achieve state-ofthe-art results on both CoNLL-2014 and JFLEG benchmark datasets. Acknowledgments We thank all the anonymous reviewers for their professional and constructive comments. We also thank Shujie Liu for his insightful discussions and suggestions. References Hiroki Asano, Tomoya Mizumoto, and Kentaro Inui. 2017. Reference-based metrics can be replaced with reference-less metrics in evaluating grammatical error correction systems. In IJCNLP. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. Bibek Behera and Pushpak Bhattacharyya. 2013. Automated grammar correction using hierarchical phrase-based statistical machine translation. In IJCNLP. Chris Brockett, William B Dolan, and Michael Gamon. 2006. Correcting esl errors using phrasal smt techniques. In COLING/ACL. Christopher Bryant, Mariano Felice, and E Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. In ACL. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In EMNLP. Martin Chodorow, Joel R Tetreault, and Na-Rae Han. 2007. Detection of grammatical errors involving prepositions. In ACL-SIGSEM workshop on prepositions. Shamil Chollampatt, Duc Tam Hoang, and Hwee Tou Ng. 2016a. Adapting grammatical error correction based on the native language of writers with neural network joint models. In EMNLP. Shamil Chollampatt and Hwee Tou Ng. 2017. Connecting the dots: Towards human-level grammatical error correction. In Workshop on Innovative Use of NLP for Building Educational Applications. Shamil Chollampatt and Hwee Tou Ng. 2018. 
A multilayer convolutional encoder-decoder neural network for grammatical error correction. arXiv preprint arXiv:1801.08831. Shamil Chollampatt, Kaveh Taghipour, and Hwee Tou Ng. 2016b. Neural network translation models for grammatical error correction. arXiv preprint arXiv:1606.00189. Daniel Dahlmeier and Hwee Tou Ng. 2011. Correcting semantic collocation errors with l1-induced paraphrases. In EMNLP. Daniel Dahlmeier and Hwee Tou Ng. 2012a. A beamsearch decoder for grammatical error correction. In EMNLP/CoNLL. Daniel Dahlmeier and Hwee Tou Ng. 2012b. Better evaluation for grammatical error correction. In NAACL. Daniel Dahlmeier and Hwee Tou Ng. 2012c. Better evaluation for grammatical error correction. In NAACL. Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner english: The nus corpus of learner english. In Workshop on innovative use of NLP for building educational applications. Robert Dale and Adam Kilgarriff. 2011. Helping our own: The hoo 2011 pilot shared task. In European Workshop on Natural Language Generation. Rachele De Felice and Stephen G Pulman. 2008. A classifier-based approach to preposition and determiner error correction in l2 english. In COLING. Mariano Felice and Zheng Yuan. 2014. Generating artificial errors for grammatical error correction. In Student Research Workshop at EACL. 1064 Mariano Felice, Zheng Yuan, Øistein E Andersen, Helen Yannakoudakis, and Ekaterina Kochmar. 2014. Grammatical error correction using hybrid systems and type filtering. In CoNLL (Shared Task). Jennifer Foster and Øistein E Andersen. 2009. Generrate: generating errors for use in grammatical error detection. In Workshop on innovative use of nlp for building educational applications. Roman Grundkiewicz and Marcin Junczys-Dowmunt. 2018. Near human-level performance in grammatical error correction with hybrid machine translation. arXiv preprint arXiv:1804.05945. C¸ aglar G¨ulc¸ehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Lo¨ıc Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. CoRR, abs/1503.03535. Na-Rae Han, Joel R Tetreault, Soo-Hwa Lee, and JinYoung Ha. 2010. Using an error-annotated learner corpus to develop an esl/eflerror correction system. In LREC. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tieyan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In NIPS. Duc Tam Hoang, Shamil Chollampatt, and Hwee Tou Ng. 2016. Exploiting n-best hypotheses to improve an smt approach to grammatical error correction. In IJCAI. Jianshu Ji, Qinlong Wang, Kristina Toutanova, Yongen Gong, Steven Truong, and Jianfeng Gao. 2017. A nested attention neural hybrid model for grammatical error correction. In ACL. Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2014. The amu system in the conll-2014 shared task: Grammatical error correction by data-intensive and feature-rich statistical machine translation. In CoNLL (Shared Task). Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2016. Phrase-based machine translation is state-ofthe-art for automatic grammatical error correction. arXiv preprint arXiv:1605.06353. Masahiro Kaneko, Yuya Sakaizawa, and Mamoru Komachi. 2017. Grammatical error detection using error-and grammaticality-specific word embeddings. In IJCNLP. Claudia Leacock, Martin Chodorow, Michael Gamon, and Joel Tetreault. 2010. Automated grammatical error detection for language learners. Synthesis lectures on human language technologies, 3(1):1–134. 
Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP. Nitin Madnani, Joel Tetreault, Martin Chodorow, and Alla Rozovskaya. 2011. They can help: Using crowdsourcing to improve the evaluation of grammatical error detection systems. In ACL. Tomas Mikolov, Martin Karafit, Lukas Burget, Jan Cernock, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH. Tomoya Mizumoto, Mamoru Komachi, Masaaki Nagata, and Yuji Matsumoto. 2011. Mining revision log of language learning sns for automated japanese error correction of second language learners. In IJCNLP. Tomoya Mizumoto and Yuji Matsumoto. 2016. Discriminative reranking for grammatical error correction with statistical machine translation. In NAACL. Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. 2015. Ground truth for grammatical error correction metrics. In ACL/IJCNLP. Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2016. There’s no comparison: Referenceless evaluation metrics in grammatical error correction. In EMNLP. Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2017. Jfleg: A fluency corpus and benchmark for grammatical error correction. arXiv preprint arXiv:1702.04066. Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The conll-2014 shared task on grammatical error correction. In CoNLL (Shared Task). Diane Nicholls. 2003. The cambridge learner corpus: Error coding and analysis for lexicography and elt. In Corpus Linguistics 2003 conference. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL. Marek Rei, Mariano Felice, Zheng Yuan, and Ted Briscoe. 2017. Artificial error generation with machine translation and syntactic patterns. arXiv preprint arXiv:1707.05236. Marek Rei and Helen Yannakoudakis. 2016. Compositional sequence labeling models for error detection in learner writing. In ACL. Alla Rozovskaya, Kai-Wei Chang, Mark Sammons, Dan Roth, and Nizar Habash. 2014. The illinoiscolumbia system in the conll-2014 shared task. In CoNLL (Shared Task). Alla Rozovskaya and Dan Roth. 2010. Training paradigms for correcting errors in grammar and usage. In NAACL. 1065 Alla Rozovskaya and Dan Roth. 2011. Algorithm selection and model adaptation for esl correction tasks. In ACL. Alla Rozovskaya and Dan Roth. 2016. Grammatical error correction: Machine translation and classifiers. In ACL. Alla Rozovskaya, Mark Sammons, and Roth Dan. 2012. The ui system in the hoo 2012 shared task on error correction. In Workshop on Building Educational Applications Using NLP. Keisuke Sakaguchi, Courtney Napoles, Matt Post, and Joel Tetreault. 2016. Reassessing the goals of grammatical error correction: Fluency instead of grammaticality. Transactions of the Association of Computational Linguistics, 4(1):169–182. Keisuke Sakaguchi, Matt Post, and Benjamin Van Durme. 2017. Grammatical error correction with neural reinforcement learning. In IJCNLP. Allen Schmaltz, Yoon Kim, Alexander Rush, and Stuart Shieber. 2017. Adapting sequence models for sentence correction. In EMNLP. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In ACL. Raymond Hendy Susanto, Peter Phandi, and Hwee Tou Ng. 2014. System combination for grammatical error correction. In EMNLP. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. 
Sequence to sequence learning with neural networks. CoRR, abs/1409.3215. Toshikazu Tajiri, Mamoru Komachi, and Yuji Matsumoto. 2012. Tense and aspect error correction for esl learners using global context. In ACL. Joel Tetreault, Jennifer Foster, and Martin Chodorow. 2010a. Using parse features for preposition selection and error detection. In ACL. Joel R Tetreault, Elena Filatova, and Martin Chodorow. 2010b. Rethinking grammatical error annotation and evaluation with the amazon mechanical turk. In Workshop on Innovative Use of NLP for Building Educational Applications. Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass decoding. In NIPS. Ziang Xie, Anand Avati, Naveen Arivazhagan, Dan Jurafsky, and Andrew Y Ng. 2016. Neural language correction with character-based attention. arXiv preprint arXiv:1603.09727. Helen Yannakoudakis, Marek Rei, Øistein E Andersen, and Zheng Yuan. 2017. Neural sequencelabelling models for grammatical error correction. In EMNLP. Ippei Yoshimoto, Tomoya Kose, Kensuke Mitsuzawa, Keisuke Sakaguchi, Tomoya Mizumoto, Yuta Hayashibe, Mamoru Komachi, and Yuji Matsumoto. 2013. Naist at 2013 conll grammatical error correction shared task. In CoNLL (Shared Task). Zheng Yuan and Ted Briscoe. 2016. Grammatical error correction using neural machine translation. In NAACL. Zheng Yuan, Ted Briscoe, Mariano Felice, Zheng Yuan, Ted Briscoe, and Mariano Felice. 2016. Candidate re-ranking for smt-based grammatical error correction. In Workshop on Innovative Use of NLP for Building Educational Applications. Zheng Yuan and Mariano Felice. 2013. Constrained grammatical error correction using statistical machine translation. In CoNLL (Shared Task). Zhirui Zhang, Shujie Liu, Mu Li, Ming Zhou, and Enhong Chen. 2018. Joint training for neural machine translation models with monolingual data. arXiv preprint arXiv:1803.00353.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1066–1076 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1066 A Neural Architecture for Automated ICD Coding Pengtao Xie†*, Haoran Shi§, Ming Zhang§ and Eric P. Xing† †Petuum Inc, USA *Machine Learning Department, Carnegie Mellon University, USA §School of Electronics Engineering and Computer Science, Peking University, China {pengtao.xie,eric.xing}@petuum.com [email protected], [email protected] Abstract The International Classification of Diseases (ICD) provides a hierarchy of diagnostic codes for classifying diseases. Medical coding – which assigns a subset of ICD codes to a patient visit – is a mandatory process that is crucial for patient care and billing. Manual coding is time-consuming, expensive, and errorprone. In this paper, we build a neural architecture for automated coding. It takes the diagnosis descriptions (DDs) of a patient as inputs and selects the most relevant ICD codes. This architecture contains four major ingredients: (1) tree-ofsequences LSTM encoding of code descriptions (CDs), (2) adversarial learning for reconciling the different writing styles of DDs and CDs, (3) isotonic constraints for incorporating the importance order among the assigned codes, and (4) attentional matching for performing many-toone and one-to-many mappings from DDs to CDs. We demonstrate the effectiveness of the proposed methods on a clinical datasets with 59K patient visits. 1 Introduction The International Classification of Diseases (ICD) is a healthcare classification system maintained by the World Health Organization (Organization et al., 1978). It provides a hierarchy of diagnostic codes of diseases, disorders, injuries, signs, symptoms, etc. It is widely used for reporting diseases and health conditions, assisting in medical reimbursement decisions, collecting morbidity and mortality statistics, to name a few. While ICD codes are important for making clinical and financial decisions, medical coding – which assigns proper ICD codes to a patient visit – is time-consuming, error-prone, and expensive. Medical coders review the diagnosis descriptions written by physicians in the form of textual phrases and sentences, and (if necessary) other information in the electronic health record of a clinical episode, then manually attribute the appropriate ICD codes by following the coding guidelines (O’malley et al., 2005). Several types of errors frequently occur. First, the ICD codes are organized in a hierarchical structure. For a node representing a disease C, the children of this node represent the subtypes of C. In many cases, the difference between disease subtypes is very subtle. It is common that human coders select incorrect subtypes. Second, when writing diagnosis descriptions, physicians often utilize abbreviations and synonyms, which causes ambiguity and imprecision when the coders are matching ICD codes to those descriptions (Sheppard et al., 2008). Third, in many cases, several diagnosis descriptions are closely related and should be mapped to a single ICD code. However, unexperienced coders may code each disease separately. Such errors are called unbundling. The cost incurred by coding errors and the financial investment spent on improving coding quality are estimated to be $25 billion per year in the US (Lang, 2007; Farkas and Szarvas, 2008). 
To reduce coding errors and cost, we aim at building an ICD coding model which automatically and accurately translates the free-text diagnosis descriptions into ICD codes. To achieve this goal, several technical challenges need to be addressed. First, there exists a hierarchical structure among the ICD codes. This hierarchy can be leveraged to improve coding accuracy. On one hand, if code A and B are both children of C, then it is unlikely to simultaneously assign A and B to a patient. On the other hand, if the distance be1067 tween A and B in the code tree is smaller than that between A and C and we know A is the correct code, then B is more likely to be a correct code than C, since codes with smaller distance are more clinically relevant. How to explore this hierarchical structure for better coding is technically demanding. Second, the diagnosis descriptions and the textual descriptions of ICD codes are written in quite different styles even if they refer to the same disease. In particular, the textual description of an ICD code is formally and precisely worded, while diagnosis descriptions are usually written by physicians in an informal and ungrammatical way, with telegraphic phrases, abbreviations, and typos. Third, it is required that the assigned ICD codes are ranked according to their relevance to the patient. How to correctly determine this order is technically nontrivial. Fourth, as stated earlier, there does not necessarily exist an one-toone mapping between diagnosis descriptions and ICD codes, and human coders should consider the overall health condition when assigning codes. In many cases, two closely related diagnosis descriptions need to be mapped onto a single combination ICD code. On the other hand, physicians may write two health conditions into one diagnosis description which should be mapped onto two ICD codes under such circumstances. Contributions In this paper, we design a neural architecture to automatically perform ICD coding given the diagnosis descriptions. Specifically, we make the following contributions: • We propose a tree-of-sequences LSTM architecture to simultaneously capture the hierarchical relationship among codes and the semantics of each code. • We use an adversarial learning approach to reconcile the heterogeneous writing styles of diagnosis descriptions and ICD code descriptions. • We use isotonic constraints to preserve the importance order among codes and develop an algorithm based on ADMM and isotonic projection to solve the constrained problem. • We use an attentional matching mechanism to perform many-to-one and one-to-many mappings between diagnosis descriptions and codes. • On a clinical datasets with 59K patient visits, we demonstrate the effectiveness of the proposed methods. The rest of the paper is organized as follows. Section 2 introduces related works. Section 3 and 4 present the dataset and methods. Section 5 gives experimental results. Section 6 presents conclusions and discussions. 2 Related Works Larkey and Croft (1996) studied the automatic assignment of ICD-9 codes to dictated inpatient discharge summaries, using a combination of three classifiers: k-nearest neighbors, relevance feedback, and Bayesian independence classifiers. This method assigns a single code to each patient visit. However, in clinical practice, each patient is usually assigned with multiple codes. Franz et al. (2000) investigated the automated coding of German-language free-text diagnosis phrases. 
This approach performs one-to-one mapping between diagnosis descriptions and ICD codes. This is not in accordance with the coding practice where one-to-many and many-to-one mappings widely exist (O’malley et al., 2005). Pestian et al. (2007) studied the assignment of ICD-9 codes to radiology reports. Kavuluru et al. (2013) proposed an unsupervised ensemble approach to automatically perform ICD-9 coding based on textual narratives in electronic health records (EHRs) Kavuluru et al. (2015) developed multi-label classification, feature selection, and learning to rank approaches for ICD-9 code assignment of in-patient visits based on EHRs. Koopman et al. (2015) explored the automatic ICD-10 classification of cancers from free-text death certificates. These methods did not consider the hierarchical relationship or importance order among codes. The tree LSTM network was first proposed by (Tai et al., 2015) to model the constituent or dependency parse trees of sentences. Teng and Zhang (2016) extended the unidirectional tree LSTM to a bidirectional one. Xie and Xing (2017) proposed a sequence-of-trees LSTM network to model a passage. In this network, a sequential LSTM is used to compose a sequence of tree LSTMs. The tree LSTMs are built on the constituent parse trees of individual sentences and the sequential LSTM is built on the sequence of sentences. Our proposed tree-of-sequences LSTM network differs from the previous works in twofold. First, it is applied to a code tree to capture the hierarchical relationship among codes. Second, it uses a tree LSTM to compose a hierarchy 1068 Diagnosis Descriptions 1. Prematurity at 35 4/7 weeks gestation 2. Twin number two of twin gestation 3. Respiratory distress secondary to transient tachypnea of the newborn 4. Suspicion for sepsis ruled out Assigned ICD Codes 1. V31.00 (Twin birth, mate liveborn, born in hospital, delivered without mention of cesarean section) 2. 765.18 (Other preterm infants, 2,000-2,499 grams) 3. 775.6 (Neonatal hypoglycemia) 4. 770.6 (Transitory tachypnea of newborn) 5. V29.0 (Observation for suspected infectious condition) 6. V05.3 (Need for prophylactic vaccination and inoculation against viral hepatitis) Table 1: The diagnosis descriptions of a patient visit and the assigned ICD codes. Inside the parentheses are the descriptions of the codes. The codes are ranked according to descending importance. of sequential LSTMs. Adversarial learning (Goodfellow et al., 2014) has been widely applied to image generation (Goodfellow et al., 2014), domain adaption (Ganin and Lempitsky, 2015), feature learning (Donahue et al., 2016), text generation (Yu et al., 2017), to name a few. In this paper, we use adversarial learning for mitigating the discrepancy among the writing styles of a pair of sentences. The attention mechanism was widely used in machine translation (Bahdanau et al., 2014), image captioning (Xu et al., 2015), reading comprehension (Seo et al., 2016), text classification (Yang et al., 2016), etc. In this work, we compute attention between sentences to perform many-to-one and one-to-many mappings. 3 Dataset and Preprocessing We performed the study on the publicly available MIMIC-III dataset (Johnson et al., 2016), which contains de-identified electronic health records (EHRs) of 58,976 patient visits in the Beth Israel Deaconess Medical Center from 2001 to 2012. Each EHR has a clinical note called discharge summary, which contains multiple sections of information, such as ‘discharge diagnosis’, ‘past medical history’, etc. 
From the ‘discharge diagnosis’ and ‘final diagnosis’ sections, we extracted the diagnosis descriptions (DDs) written by physicians. Each DD is a short phrase or a sentence, articulating a certain disease or condition. Medical coders perform ICD coding mainly based on DDs. Following such a practice, in this paper, we set the inputs of the automated coding model to be Encoder of diagnosis description Tree-of-sequences LSTM encoder of ICDcode description Adversarial reconciliation module Attentional matching module Isotonic constraints 1. Pneumonia 2. Acute kidney failure ...... Diagnosis descriptions V31.00 775.6 765.18 770.6 Assigned ICD codes Figure 1: Architecture of the ICD Coding Model the DDs while acknowledging that other information in the EHRs is also valuable and is referred to by coders for code assignment. For simplicity, we leave the incorporation of non-DD information to future study. Each patient visit is assigned with a list of ICD codes, ranked in descending order of importance and relevance. For each visit, the number of codes is usually not equal to the number of diagnosis descriptions. These ground-truth codes serve as the labels to train our coding model. The entire dataset contains 6,984 unique codes, each of which has a textual description, describing a disease, symptom, or condition. The codes are organized into a hierarchy where the top-level codes correspond to general diseases while the bottom-level ones represent specific diseases. In the code tree, children of a node represent subtypes of a disease. Table 1 shows the DDs and codes of an exemplar patient. 4 Methods In this section, we present a neural architecture for ICD coding. 4.1 Overview Figure 1 shows the overview of our approach. The proposed ICD coding model consists of five modules. The model takes the ICD-code tree and diagnosis descriptions (DDs) of a patient as inputs and assigns a set of ICD codes to the patient. The encoder of DDs generates a latent representation vector for a DD. The encoder of ICD codes is a tree-of-sequences long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) network. It takes the textual descriptions of the ICD codes and their hierarchical structure as in1069 Neonatal Necrotizing Enterocolitis Seq LSTM Sequential LSTM Seq LSTM Seq LSTM Seq LSTM Seq LSTM Figure 2: Tree-of-Sequences LSTM puts and produces a latent representation for each code. The representation aims at simultaneously capturing the semantics of each code and the hierarchical relationship among codes. By incorporating the code hierarchy, the model can avoid selecting codes that are subtypes of the same disease and promote the selection of codes that are clinically correlated. The writing styles of DDs and code descriptions (CDs) are largely different, which makes the matching between a DD and a CD error-prone. To address this issue, we develop an adversarial learning approach to reconcile the writing styles. On top of the latent representation vectors of the descriptions, we build a discriminative network to distinguish which ones are DDs and which are CDs. The encoders of DDs and CDs try to make such a discrimination impossible. By doing this, the learned representations are independent of the writing styles and facilitate more accurate matching. The representations of DDs and CDs are fed into an attentional matching module to perform code assignment. This attentional mechanism allows multiple DDs to be matched to a single code and allows a single DD to be matched to multiple codes. 
During training, we incorporate the order of importance among codes as isotonic constraints. These constraints regulate the model’s weight parameters so that codes with higher importance are given larger prediction scores. 4.2 Tree-of-Sequences LSTM Encoder This section introduces the encoder of ICD codes. Each code has a description (a sequence of words) that tells the semantics of this code. We use a sequential LSTM (SLSTM) (Hochreiter and Schmidhuber, 1997) to encode this description. To capture the hierarchical relationship among codes, we build a tree LSTM (TLSTM) (Tai et al., 2015) along the code tree. At each TLSTM node, the input vector is the latent representation generated by the SLSTM. Combining these two types of LSTMs together, we obtain a tree-of-sequences LSTM network (Figure 2). Sequential LSTM A sequential LSTM (SLSTM) (Hochreiter and Schmidhuber, 1997) network is a special type of recurrent neural network that (1) learns the latent representation (which usually reflects certain semantic information) of words, and (2) models the sequential structure among words. In the word sequence, each word t is allocated with an SLSTM unit, which consists of the following components: an input gate it, a forget gate ft, an output gate ot, a memory cell ct, and a hidden state st. These components (vectors) are computed as follows: it = σ(W(i)st−1 + U(i)xt + b(i)) ft = σ(W(f)st−1 + U(f)xt + b(f)) ot = σ(W(o)st−1 + U(o)xt + b(o)) ct = it ⊙tanh(W(c)st−1 + U(c)xt + b(c)) +ft ⊙ct−1 st = ot ⊙tanh(ct) (1) where xt is the embedding vector of word t. W, U are component-specific weight matrices and b are bias vectors. Tree-of-sequences LSTM We use a bidirectional tree LSTM (TLSTM) (Tai et al., 2015; Xie and Xing, 2017) to capture the hierarchical relationships among codes. The inputs of this LSTM include the code hierarchy and hidden states of individual codes produced by the SLSTMs. It consists of a bottom-up TLSTM and a top-down TLSTM, which produce two hidden states h↑and h↓ at each node in the tree. In the bottom-up TLSTM, an internal node (representing a code C, having M children) is comprised of these components: an input gate i↑, an output gate o↑, a memory cell c↑, a hidden state h↑and M child-specific forget gates {f(m) ↑ }M m=1 where f(m) ↑ corresponds to the m-th child. The transition equations among components are: i↑= σ(PM m=1 W(i,m) ↑ h(m) ↑ + U(i)s + b(i) ↑) ∀m, f(m) ↑ = σ(W(f,m) ↑ h(m) ↑ + U(f,m)s + b(f,m) ↑ ) o↑= σ(PM m=1 W(o,m) ↑ h(m) ↑ + U(o)s + b(o) ↑) u↑= tanh(PM m=1 W(u,m) ↑ h(m) ↑ + U(u)s + b(u) ↑) c↑= i↑⊙u↑+ PM m=1 f(m) ↑ ⊙c(m) ↑ h↑= o↑⊙tanh(c↑) (2) 1070 where s is the SLSTM hidden state that encodes the description of code C; {h(m) ↑ }M m=1 and {c(m) ↑ }M m=1 are the bottom-up TLSTM hidden states and memory cells of the children. W, U, b are component-specific weight matrices and bias vectors. For a leaf node having no children, its only input is the SLSTM hidden state s and no forget gates are needed. In the top-down TLSTM, for a non-root node, it has such components: an input gate i↓, a forget gate f↓, an output gate o↓, a memory cell c↓and a hidden state h↓. The transition equations are: i↓= σ(W(i) ↓h(p) ↓ + b(i) ↓) f↓= σ(W(f) ↓h(p) ↓ + b(f) ↓) o↓= σ(W(o) ↓h(p) ↓ + b(o) ↓) u↓= tanh(W(u) ↓h(p) ↓ + b(u) ↓) c↓= i↓⊙u↓+ f↓⊙c(p) ↓ h↓= o↓⊙tanh(c↓) (3) where h(p) ↓ and c(p) ↓ are the top-down TLSTM hidden state and memory cell of the parent of this node. For the root node which has no parent, h↓ cannot be computed using the above equations. 
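As a concrete illustration of the bottom-up transition in Eq. (2), before the root's top-down state is addressed, the following PyTorch sketch implements one bottom-up node update. It is a simplified child-sum variant that shares one weight matrix across children, whereas Eq. (2) uses child-specific matrices W(·,m); all class and variable names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class BottomUpTreeLSTMCell(nn.Module):
    """One bottom-up step of the tree-of-sequences LSTM (cf. Eq. 2).

    Child-sum simplification: a single weight matrix is shared across
    children, whereas the paper uses child-specific matrices W^(i,m) etc.
    """
    def __init__(self, dim):
        super().__init__()
        self.W_iou = nn.Linear(dim, 3 * dim)   # acts on the summed child hidden states
        self.U_iou = nn.Linear(dim, 3 * dim)   # acts on the SLSTM state s of this code
        self.W_f = nn.Linear(dim, dim)         # forget gate, applied per child
        self.U_f = nn.Linear(dim, dim)

    def forward(self, s, child_h, child_c):
        # s:       (dim,)   SLSTM encoding of this code's description
        # child_h: (M, dim) bottom-up hidden states of the M children (M may be 0)
        # child_c: (M, dim) bottom-up memory cells of the M children
        h_sum = child_h.sum(dim=0) if child_h.numel() else torch.zeros_like(s)
        i, o, u = (self.W_iou(h_sum) + self.U_iou(s)).chunk(3, dim=-1)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        if child_h.numel():
            f = torch.sigmoid(self.W_f(child_h) + self.U_f(s))   # (M, dim)
            c = i * u + (f * child_c).sum(dim=0)
        else:                                   # leaf code: only the SLSTM state s
            c = i * u
        h = o * torch.tanh(c)
        return h, c

# Usage: codes are processed in post-order so children are encoded before parents.
cell = BottomUpTreeLSTMCell(dim=100)
h_leaf, c_leaf = cell(torch.randn(100), torch.empty(0, 100), torch.empty(0, 100))
h_root, c_root = cell(torch.randn(100), h_leaf.unsqueeze(0), c_leaf.unsqueeze(0))
```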
Instead, we set h↓to h↑(the bottom-up TLSTM hidden state generated at the root node). h↑captures the semantics of all codes in this hierarchy, which is then propagated downwards to each individual code via the top-down TLSTM dynamics. We concatenate the hidden states of the two directions to obtain the bidirectional TLSTM encoding of each code h = [h↑; h↓]. The bottom-up TLSTM composes the semantics of children (representing sub-diseases) and merge them into the current node, which hence captures child-to-parent relationship. The top-down TLSTM makes each node inherit the semantics of its parent, which captures parent-to-child relation. As a result, the hierarchical relationship among codes are encoded in the hidden states. For the diagnosis descriptions of a patient, we use an SLSTM network to encode each description individually. The weight parameters of this SLSTM are tied with those of the SLSTM used for encoding code descriptions. 4.3 Attentional Matching Next, we introduce how to map the DDs to codes. We denote the hidden representations of DDs and codes as {hm}M m=1 and {un}N n=1 respectively, where M is the number of DDs of one patient and N is the total number of codes in the dataset. The mapping from DDs to codes is not one-to-one. In many cases, a code is assigned only when a certain combination of K (1 < K ≤M) diseases simultaneously appear within the M DDs and the value of K depends on this code. Among the K diseases, their importance of determining the assignment of this code is different. For the rest M −K DDs, we can consider their importance score to be zero. We use a soft-attention mechanism (Bahdanau et al., 2014) to calculate these importance scores. For a code un, the importance of a DD hm to un is calculated as anm = u⊤ n hm. We normalize the scores {anm}M m=1 of all DDs into a probabilistic simplex using the softmax operation: ˜anm = exp(anm)/ PM l=1 exp(anl). Given these normalized importance scores {˜anm}M m=1, we use them to weight the representations of DDs and get a single attentional vector of the M DDs: bhn = PM m=1 ˜anmhm. Then we concatenate bhn and un, and use a linear classifier to predict the probability that code n should be assigned: pn = sigmoid(w⊤ n [bhn; un]+bn), where the coefficients wn and bias bn are specific to code n. We train the weight parameters Θ of the proposed model using the data of L patient visits. Θ includes the sequential LSTM weights Ws, tree LSTM weights Wt and weights Wp in the final prediction layer. Let c(l) ∈RN be a binary vector where c(l) n = 1 if the n-th code is assigned to this patient and c(l) n = 0 if otherwise. Θ can be learned by minimizing the following prediction loss: minΘ Lpred(Θ) = L X l=1 N X n=1 CE(p(l) n , c(l) n ) (4) where p(l) n is the predicted probability that code n is assigned to patient visit l and p(l) n is a function of Θ. CE(·, ·) is the cross-entropy loss. 4.4 Adversarial Reconciliation of Writing Styles We use an adversarial learning (Goodfellow et al., 2014) approach to reconcile the different writing styles of diagnosis descriptions (DDs) and code descriptions (CDs). The basic idea is: after encoded, if a description cannot be discerned to be a DD or a CD, then the difference in their writing styles is eliminated. We build a discriminative network which takes the encoding vector of a description as input and tries to identify it as a DD 1071 or CD. The encoders of DDs and CDs adjust their weight parameters so that such a discrimination is difficult to be achieved by the discriminative network. 
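Stepping back to the attentional matching of Section 4.3, the sketch below shows the normalized attention scores, the attentional DD summary for each code, and the per-code assignment probability pn. It assumes PyTorch with illustrative dimensions, and uses randomly initialized code-specific weights wn, bn as in the paper; it is a sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttentionalMatcher(nn.Module):
    """Soft-attention matching of DD vectors {h_m} against code vectors {u_n}."""
    def __init__(self, dim, num_codes):
        super().__init__()
        self.W = nn.Parameter(torch.randn(num_codes, 2 * dim) * 0.01)  # code-specific w_n
        self.b = nn.Parameter(torch.zeros(num_codes))                   # code-specific b_n

    def forward(self, H, U):
        # H: (M, dim) DD encodings for one patient; U: (N, dim) code encodings
        attn = torch.softmax(U @ H.t(), dim=1)      # a~_{nm}: a_{nm}=u_n^T h_m, softmax over DDs
        H_hat = attn @ H                            # (N, dim): attentional DD vector per code
        feats = torch.cat([H_hat, U], dim=1)        # [h^_n ; u_n]
        return torch.sigmoid((feats * self.W).sum(dim=1) + self.b)   # p_n

matcher = AttentionalMatcher(dim=100, num_codes=2833)
p = matcher(torch.randn(4, 100), torch.randn(2833, 100))   # (2833,) assignment probabilities
# Training: binary cross-entropy against the ground-truth code indicators, as in Eq. (4)
# loss = nn.functional.binary_cross_entropy(p, targets)
```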
Consider all the descriptions {tr, yr}R r=1 where tr is a description and yr is a binary label. yr = 1 if tr is a DD and yr = 0 if otherwise. Let f(tr; Ws) denote the sequential LSTM (SLSTM) encoder parameterized by Ws. This encoder is shared by the DDs and CDs. Note that for CDs, a tree LSTM is further applied on top of the encodings produced by the SLSTM. We use the SLSTM encoding vectors of CDs as the input of the discriminative network rather than using the TLSTM encodings since the latter are irrelevant to writing styles. Let g(f(tr; Ws); Wd) denote the discriminative network parameterized by Wd. It takes the encoding vector f(tr; Ws) as input and produces the probability that tr is a DD. Adversarial learning is performed by solving this problem: max Ws min Wd Ladv = R X r=1 CE(g(f(tr; Ws); Wd), yr) (5) The discriminative network tries to differentiate DDs from CDs by minimizing this classification loss while the encoder maximizes this loss so that DDs and CDs are not distinguishable. 4.5 Isotonic Constraints Next, we incorporate the importance order among ICD codes. For the D(l) codes assigned to patient l, without loss of generality, we assume the order is 1 ≻2 · · · ≻D(l) (the order is given by human coders as ground-truth in the MIMIC-III dataset). We use the predicted probability pi (1 ≤i ≤D(l)) defined in Section 4.3 to characterize the importance of code i. To incorporate the order, we impose an isotonic constraint on the probabilities: p(l) 1 ≻p(l) 2 · · · ≻p(l) D(l), and solve the following problem: minΘ Lpred(Θ) + maxWd(−λLadv(Ws, Wd)) s.t. p(l) 1 ≻p(l) 2 · · · ≻p(l) D(l) ∀l = 1, · · · , L (6) where the probabilities p(l) i are functions of Θ and λ is a tradeoff parameter. We develop an algorithm based on the alternating direction method of multiplier (ADMM) (Boyd et al., 2011) to solve the problem defined in Eq.(6). Let p(l) be a |D(l)|-dimensional vector where the i-th element is p(l) i . We first write the problem into an equivalent form minΘ Lpred(Θ) + maxWd(−λLadv(Ws, Wd)) s.t. p(l) = q(l) q(l) 1 ≻q(l) 2 · · · ≻q(l) |D(l)| ∀l = 1, · · · , L (7) Then we write down the augmented Lagrangian min Θ,q,v Lpred(Θ) + maxWd(−λLadv(Ws, Wd)) +⟨p(l) −q(l), v(l)⟩+ ρ 2∥p(l) −q(l)∥2 2 s.t. q(l) 1 ≻q(l) 2 · · · ≻q(l) |D(l)| ∀l = 1, · · · , L (8) We solve this problem by alternating between {p(l)}L l=1, {q(l)}L l=1 and {v(l)}L l=1 The subproblem defined over q(l) is minq(l) −⟨q(l), v(l)⟩+ ρ 2∥p(l) −q(l)∥2 2 s.t. q(l) 1 ≻q(l) 2 · · · ≻q(l) |D(l)| (9) which is an isotonic projection problem and can be solved via the algorithm proposed in (Yu and Xing, 2016). With {q(l)}L l=1 and {v(l)}L l=1 fixed, the sub-problem is minΘ Lpred(Θ) + maxWd(−λLadv(Ws, Wd)) which can be solved using stochastic gradient descent (SGD). The update of v(l) is simple: v(l) = v(l) + ρ(p(l) −q(l)). 5 Experiments In this section, we present experiment results. 5.1 Experimental Settings Out of the 6,984 unique codes, we selected 2,833 codes that have the top frequencies to perform the study. We split the data into a train/validation/test dataset with 40k/7k/12k patient visits respectively. The hyperparameters were tuned on the validation set. The SLSTMs were bidirectional and dropout with 0.5 probability (Srivastava et al., 2014) was used. The size of hidden states in all LSTMs was set to 100. The word embeddings were trained on the fly and their dimension was set to 200. The tradeoff parameter λ was set to 0.1. The parameter ρ in the ADMM algorithm was set to 1. 
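Returning to the ADMM updates of Section 4.5, the q-subproblem in Eq. (9) reduces, after completing the square, to a Euclidean projection of p + v/ρ onto non-increasing sequences. The paper solves it with the algorithm of Yu and Xing (2016); the sketch below substitutes scikit-learn's isotonic regression, which computes the same least-squares projection for this chain ordering, so treat it as an equivalent stand-in rather than the authors' implementation.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def admm_q_update(p, v, rho=1.0):
    """Solve min_q -<q, v> + rho/2 ||p - q||^2  s.t.  q_1 >= q_2 >= ... >= q_D.

    Completing the square shows this is the projection of (p + v / rho)
    onto non-increasing sequences, solved exactly by isotonic regression
    with a decreasing constraint.
    """
    target = p + v / rho
    iso = IsotonicRegression(increasing=False)
    return iso.fit_transform(np.arange(len(target)), target)

def admm_dual_update(v, p, q, rho=1.0):
    """Dual ascent step: v <- v + rho * (p - q)."""
    return v + rho * (p - q)

# Toy ADMM step for one patient with four assigned codes
p = np.array([0.40, 0.70, 0.55, 0.10])   # predicted probabilities, violating the order
v = np.zeros_like(p)
q = admm_q_update(p, v, rho=1.0)          # -> [0.55, 0.55, 0.55, 0.10], non-increasing
v = admm_dual_update(v, p, q, rho=1.0)
```

The Θ-subproblem is then handled by stochastic gradient descent, as described next.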
In the SGD algorithm for solving minΘ Lpred(Θ)+maxWd(−λLadv(Ws, Wd)), we used the ADAM (Kingma and Ba, 2014) optimizer with an initial learning rate 0.001 and a minibatch size 20. Sensitivity (true positive rate) and 1072 specificity (true negative rate) were used to evaluate the code assignment performance. We calculated these two scores for each individual code on the test set, then took a weighted (proportional to codes’ frequencies) average across all codes. To evaluate the ranking performance of codes, we used normalized discounted cumulative gain (NDCG) (J¨arvelin and Kek¨al¨ainen, 2002). 5.2 Ablation Study We perform ablation study to verify the effectiveness of each module in our model. To evaluate module X, we remove it from the model without changing other modules and denote such a baseline by No-X. The comparisons of No-X with the full model are given in Table 2. Tree-of-sequences LSTM To evaluate this module, we compared with the two configurations: (1) No-TLSTM, which removes the tree LSTM and directly uses the hidden states produced by the sequential LSTM as final representations of codes; (2) Bottom-up TLSTM, which removes the hidden states generated by the top-down TLSTM. In addition, we compared with four hierarchical classification baselines including (1) hierarchical network (HierNet) (Yan et al., 2015), (2) HybridNet (Hou et al., 2017), (3) branch network (BranchNet) (Zhu and Bain, 2017), (4) label embedding tree (LET) (Bengio et al., 2010), by using them to replace the bidirectional tree LSTM while keeping other modules untouched. Table 2 shows the average sensitivity and specificity scores achieved by these methods on the test set. We make the following observations. First, removing tree LSTM largely degrades performance: the sensitivity and specificity of No-TLSTM is 0.23 and 0.28 respectively while our full model (which uses bidirectional TLSTM) achieves 0.29 and 0.33 respectively. The reason is No-TLSTM ignores the hierarchical relationship among codes. Second, bottom-up tree LSTM alone performs less well than bidirectional tree LSTM. This demonstrates the necessity of the top-down TLSTM, which ensures every two codes are connected by directed paths and can more expressively capture code-relations in the hierarchy. Third, our method outperforms the four baselines. The possible reason is our method directly builds codes’ hierarchical relationship into their representations while the baselines perform representation-learning and relationship-capturing Sensitivity Specificity (Larkey and Croft, 1996) 0.15 0.17 (Franz et al., 2000) 0.19 0.21 (Pestian et al., 2007) 0.12 0.21 (Kavuluru et al., 2013) 0.09 0.11 (Kavuluru et al., 2015) 0.21 0.25 (Koopman et al., 2015) 0.18 0.20 LET 0.23 0.29 HierNet 0.26 0.30 HybridNet 0.25 0.31 BranchNet 0.25 0.29 No-TLSTM 0.23 0.28 Bottom-up TLSTM 0.27 0.31 No-AL 0.26 0.31 No-IC 0.24 0.29 No-AM 0.27 0.29 Our full model 0.29 0.33 Table 2: Sensitivity and Specificity on the Test Set separately. Next, we present some qualitative results. For a patient (admission ID 147798) having a DD ‘E Coli urinary tract infection’, without using tree LSTM, two sibling codes 585.2 (chronic kidney disease, stage II (mild)) – which is the groundtruth – and 585.4 (chronic kidney disease, stage IV (severe)) are simultaneously assigned possibly because their textual descriptions are very similar (only differ in the level of severity). 
This is incorrect because 585.2 and 585.4 are the children of 585 (chronic kidney disease) and the severity level of this disease cannot simultaneously be mild and severe. After tree LSTM is added, the false prediction of 585.4 is eliminated, which demonstrates the effectiveness of tree LSTM in incorporating one constraint induced by the code hierarchy: among the nodes sharing the same parent, only one should be selected. For patient 197205, No-TLSTM assigns the following codes: 462 (subacute sclerosing panencephalitis), 790.29 (other abnormal glucose), 799.9 (unspecified viral infection), and 285.21 (anemia in chronic kidney disease). Among these codes, the first three are ground-truth and the fourth one is incorrect (the ground-truth is 401.9 (unspecified essential hypertension)). Adding tree LSTM fixes this error. The average distance between 401.9 and the rest of ground-truth codes is 6.2. For the incorrectly assigned code 285.21, such a distance is 7.9. This demonstrates that tree LSTM is able to capture another constraint imposed by the hierarchy: codes with smaller treedistance are more likely to be assigned together. 1073 Position 2 4 6 8 No-IC 0.27 0.26 0.23 0.20 IC 0.32 0.29 0.27 0.23 Table 3: Comparison of NDCG Scores in the Ablation Study of Isotonic Constraints. Adversarial learning To evaluate the efficacy of adversarial learning (AL), we remove it from the full model and refer to this baseline as No-AL. Specifically, in Eq.(6), the loss term maxWd(−Ladv(Ws, Wd)) is taken away. Table 2 shows the results, from which we observe that after AL is removed, the sensitivity and specificity are dropped from 0.29 and 0.33 to 0.26 and 0.31 respectively. No-AL does not reconcile different writing styles of diagnosis descriptions (DDs) and code descriptions (CDs). As a result, a DD and a CD that have similar semantics may be mismatched because their writing styles are different. For example, a patient (admission ID 147583) has a DD ‘h/o DVT on anticoagulation’, which contains abbreviation DVT (deep vein thrombosis). Due to the presence of this abbreviation, it is difficult to assign a proper code to this DD since the textual descriptions of codes do not contain abbreviations. With adversarial learning, our model can correctly map this DD to a ground-truth code: 443.9 (peripheral vascular disease, unspecified). Without AL, this code is not selected. As another example, a DD ‘coronary artery disease, STEMI, s/p 2 stents placed in RCA’ was given to patient 148532. This DD is written informally and ungrammatically, and contains too much detailed information, e.g., ‘s/p 2 stents placed in RCA’. Such a writing style is quite different from that of CDs. With AL, our model successfully matches this DD to a ground-truth code: 414.01 (coronary atherosclerosis of native coronary artery). On the contrary, No-AL fails to achieve this. Isotonic constraint (IC) To evaluate this ingredient, we remove the ICs from Eq.(6) during training and denote this baseline as No-IC. We use NDCG to measure the ranking performance, which is calculated in the following way. Consider a testing patient-visit l where the ground-truth ICD codes are M(l). For any code c, we define the relevance score of c to l as 0 if c /∈M(l) and as |M(l)| −r(c) if otherwise, where r(c) is the ground-truth rank of c in M(l). We rank codes in descending order of their corresponding prediction probabilities and obtain the predicted rank for each code. 
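The sketch below shows how NDCG@k can be computed from these relevance scores and predicted ranks for a single visit. The paper does not state whether it uses the linear gain rel/log2(rank+1) or the exponential 2^rel − 1 gain, so the linear form here is an assumption, and the example codes are illustrative.

```python
import numpy as np

def ndcg_at_k(predicted_codes, ground_truth_codes, k):
    """NDCG@k for one patient visit with the relevance scheme of Section 5.2.

    predicted_codes: codes sorted by descending predicted probability.
    ground_truth_codes: codes in ground-truth order of importance (rank 1 first).
    """
    M = len(ground_truth_codes)
    rel = {c: M - (r + 1) for r, c in enumerate(ground_truth_codes)}   # |M(l)| - r(c)
    gains = np.array([rel.get(c, 0.0) for c in predicted_codes[:k]], dtype=float)
    dcg = (gains / np.log2(np.arange(2, len(gains) + 2))).sum()
    ideal = np.sort(list(rel.values()))[::-1][:k].astype(float)
    idcg = (ideal / np.log2(np.arange(2, len(ideal) + 2))).sum()
    return dcg / idcg if idcg > 0 else 0.0

# Example: a hypothetical predicted ranking vs. the ground-truth order of Table 1
gt = ['V31.00', '765.18', '775.6', '770.6', 'V29.0', 'V05.3']
pred = ['V31.00', '770.6', '765.18', '428.0', 'V29.0', '775.6']
print(ndcg_at_k(pred, gt, k=4))
```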
We calculate the NDCG scores at position 2, 4, 6, 8 based on the relevance scores and predicted ranks, which are shown in Table 3. As can be seen, using IC achieves much higher NDCG than NoIC, which demonstrates the effectiveness of IC in capturing the importance order among codes. We also evaluate how IC affects the sensitivity and specificity of code assignment. As can be seen from Table 2, No-IC degrades the two scores from 0.29 and 0.33 to 0.24 and 0.29 respectively, which indicates that IC is helpful in training a model that can more correctly assign codes. This is because IC encourages codes that are highly relevant to the patients to be ranked at top positions, which prevents the selection of irrelevant codes. Attentional matching (AM) In the evaluation of this module, we compare with a baseline – No-AM, which performs an unweighted average of the M DDs: bhn = 1 M PM m=1 hm, concatenates bhn with un and feeds the concatenated vector into the final prediction layer. From Table 2, we can see our full model (with AM) outperforms No-AM, which demonstrates the effectiveness of attentional matching. In determining whether a code should be assigned, different DDs have different importance weights. No-AM ignores such weights, therefore performing less well. AM can correctly perform many-to-one mapping from multiple DDs to a CD. For example, patient 190236 was given two DDs: ‘renal insufficiency’ and ‘acute renal failure’. AM maps them to a combined ICD code: 403.91 (hypertensive chronic kidney disease, unspecified, with chronic kidney disease stage V or end stage renal disease), which is in the ground-truth provided by medical coders. On the contrary, No-AM fails to assign this code. On the other hand, AM is able to correctly map a DD to multiple CDs. For example, a DD ‘congestive heart failure, diastolic’ was given to patient 140851. AM successfully maps this DD to two codes: (1) 428.0 (congestive heart failure, unspecified); (2) 428.30 (diastolic heart failure, unspecified). Without AM, this DD is mapped only to 428.0. 5.3 Holistic Comparison with Other Baselines In addition to evaluating the four modules individually, we also compared our full model with four other baselines proposed by (Larkey and Croft, 1074 1996; Franz et al., 2000; Pestian et al., 2007; Kavuluru et al., 2013, 2015; Koopman et al., 2015) for ICD coding. Table 2 shows the results. As can be seen, our approach achieves much better sensitivity and specificity scores. The reason that our model works better is two-fold. First, our model is based on deep neural network, which has arguably better modeling power than linear methods used in the baselines. Second, our model is able to capture the hierarchical relationship and importance order among codes, can alleviate the discrepancy in writing styles and allows flexible many-toone and one-to-many mappings from DDs to CDs. These merits are not possessed by the baselines. 6 Conclusions and Discussions In this paper, we build a neural network model for automated ICD coding. Evaluations on the MIMIC-III dataset demonstrate the following. First, the tree-of-sequences LSTM network effectively discourages the co-selection of sibling codes and promotes the co-assignment of clinicallyrelevant codes. Adversarial learning improves the matching accuracy by alleviating the discrepancy among the writing styles of DDs and CDs. Third, isotonic constraints promote the correct ranking of codes. Fourth, the attentional matching mechanism is able to perform many-to-one and one-tomany mappings. 
In the coding practice of human coders, in addition to the diagnosis descriptions, other information contained in nursing notes, lab values, and medical procedures are also leveraged for code assignment. We have initiated preliminary investigation along this line and added two new input sources: (1) the rest of discharge summary and (2) lab values. The sensitivity is improved from 0.29 to 0.32 and the specificity is improved from 0.33 to 0.35. A full study is ongoing. At present, the major limitations of this work include: (1) it does not perform well on infrequent codes; (2) it is less capable of dealing with abbreviations. We will address these two issues in future by investigating diversity-promoting regularization (Xie et al., 2017) and leveraging an external knowledge base that maps medical abbreviations into their full names. The proposed methods can be applied to other tasks in NLP. The tree-of-sequences model can be applied for ontology annotation. It takes the textual descriptions of concepts in the ontology and their hierarchical structure as inputs and produces a latent representation for each concept. The representations can simultaneously capture the semantics of codes and their relationships. The proposed adversarial reconciliation of writing styles and attentional matching can be applied for knowledge mapping or entity linking. For example, in tweets, we can use the method to map an informally written mention ‘nbcbightlynews’ to a canonical entity ‘NBC Nightly News’ in the knowledge base. Acknowledgements We would like to thank the anonymous reviewers for their very constructive and helpful comments and suggestions. Pengtao Xie and Eric P. Xing are supported by National Institutes of Health P30DA035778, Pennsylvania Department of Health BD4BH4100070287, and National Science Foundation IIS1617583. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Samy Bengio, Jason Weston, and David Grangier. 2010. Label embedding trees for large multi-class tasks. In NIPS. Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, et al. 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends R⃝in Machine Learning, 3(1):1–122. Jeff Donahue, Philipp Kr¨ahenb¨uhl, and Trevor Darrell. 2016. Adversarial feature learning. arXiv preprint arXiv:1605.09782. Rich´ard Farkas and Gy¨orgy Szarvas. 2008. Automatic construction of rule-based icd-9-cm coding systems. BMC bioinformatics, 9(3):S10. Pius Franz, Albrecht Zaiss, Stefan Schulz, Udo Hahn, and R¨udiger Klar. 2000. Automated coding of diagnoses–three methods compared. In Proceedings of the AMIA Symposium, page 250. American Medical Informatics Association. Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning, pages 1180–1189. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron 1075 Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Saihui Hou, Yushan Feng, and Zilei Wang. 2017. Vegfru: A domain-specific dataset for fine-grained visual categorization. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 541–549. Kalervo J¨arvelin and Jaana Kek¨al¨ainen. 2002. Cumulated gain-based evaluation of ir techniques. ACM Transactions on Information Systems (TOIS), 20(4):422–446. Alistair EW Johnson, Tom J Pollard, Lu Shen, Liwei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. Scientific data, 3. Ramakanth Kavuluru, Sifei Han, and Daniel Harris. 2013. Unsupervised extraction of diagnosis codes from emrs using knowledge-based and extractive text summarization techniques. In Canadian conference on artificial intelligence, pages 77–88. Springer. Ramakanth Kavuluru, Anthony Rios, and Yuan Lu. 2015. An empirical evaluation of supervised learning approaches in assigning diagnosis codes to electronic medical records. Artificial intelligence in medicine, 65(2):155–166. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Bevan Koopman, Guido Zuccon, Anthony Nguyen, Anton Bergheim, and Narelle Grayson. 2015. Automatic icd-10 classification of cancers from free-text death certificates. International journal of medical informatics, 84(11):956–965. Dee Lang. 2007. Consultant report-natural language processing in the health care industry. Cincinnati Children’s Hospital Medical Center, Winter. Leah S Larkey and W Bruce Croft. 1996. Combining classifiers in text categorization. In Proceedings of the 19th annual international ACM SIGIR conference on Research and development in information retrieval, pages 289–297. ACM. Kimberly J O’malley, Karon F Cook, Matt D Price, Kimberly Raiford Wildes, John F Hurdle, and Carol M Ashton. 2005. Measuring diagnoses: Icd code accuracy. Health services research, 40(5p2):1620–1639. World Health Organization et al. 1978. International classification of diseases:[9th] ninth revision, basic tabulation list with alphabetic index. World Health Organization. John P Pestian, Christopher Brew, Paweł Matykiewicz, Dj J Hovermale, Neil Johnson, K Bretonnel Cohen, and Włodzisław Duch. 2007. A shared task involving multi-label classification of clinical free text. In Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing, pages 97–104. Association for Computational Linguistics. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Joanna E Sheppard, Laura CE Weidner, Saher Zakai, Simon Fountain-Polley, and Judith Williams. 2008. Ambiguous abbreviations: an audit of abbreviations in paediatric note keeping. Archives of disease in childhood, 93(3):204–206. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of machine learning research, 15(1):1929–1958. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075. Zhiyang Teng and Yue Zhang. 2016. Bidirectional tree-structured lstm with head lexicalization. arXiv preprint arXiv:1611.06788. Pengtao Xie, Aarti Singh, and Eric P. Xing. 2017. Uncorrelation and evenness: a new diversity-promoting regularizer. 
In Proceedings of the 34th International Conference on Machine Learning, pages 3811– 3820. Pengtao Xie and Eric Xing. 2017. A constituentcentric neural architecture for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1405–1414. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. Zhicheng Yan, Hao Zhang, Robinson Piramuthu, Vignesh Jagadeesh, Dennis DeCoste, Wei Di, and Yizhou Yu. 2015. Hd-cnn: hierarchical deep convolutional neural networks for large scale visual recognition. In Proceedings of the IEEE International Conference on Computer Vision, pages 2740–2748. 1076 Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In AAAI. Yao-Liang Yu and Eric P Xing. 2016. Exact algorithms for isotonic regression and related. In Journal of Physics: Conference Series, volume 699, page 012016. IOP Publishing. Xinqi Zhu and Michael Bain. 2017. B-cnn: Branch convolutional neural network for hierarchical classification. arXiv preprint arXiv:1709.09890.
2018
98
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1077–1087 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1077 Domain Adaptation with Adversarial Training and Graph Embeddings Firoj Alam∗, Shafiq Joty†, and Muhammad Imran∗ Qatar Computing Research Institute, HBKU, Qatar∗ School of Computer Science and Engineering† Nanyang Technological University, Singapore† {fialam, mimran}@hbku.edu.qa∗ [email protected]† Abstract The success of deep neural networks (DNNs) is heavily dependent on the availability of labeled data. However, obtaining labeled data is a big challenge in many real-world problems. In such scenarios, a DNN model can leverage labeled and unlabeled data from a related domain, but it has to deal with the shift in data distributions between the source and the target domains. In this paper, we study the problem of classifying social media posts during a crisis event (e.g., Earthquake). For that, we use labeled and unlabeled data from past similar events (e.g., Flood) and unlabeled data for the current event. We propose a novel model that performs adversarial learning based domain adaptation to deal with distribution drifts and graph based semi-supervised learning to leverage unlabeled data within a single unified deep learning framework. Our experiments with two real-world crisis datasets collected from Twitter demonstrate significant improvements over several baselines. 1 Introduction The application that motivates our work is the time-critical analysis of social media (Twitter) data at the sudden-onset of an event like natural or man-made disasters (Imran et al., 2015). In such events, affected people post timely and useful information of various types such as reports of injured or dead people, infrastructure damage, urgent needs (e.g., food, shelter, medical assistance) on these social networks. Humanitarian organizations believe timely access to this important information from social networks can help significantly and reduce both human loss and economic damage (Varga et al., 2013; Vieweg et al., 2014; Power et al., 2013). In this paper, we consider the basic task of classifying each incoming tweet during a crisis event (e.g., Earthquake) into one of the predefined classes of interest (e.g., relevant vs. nonrelevant) in real-time. Recently, deep neural networks (DNNs) have shown great performance in classification tasks in NLP and data mining. However the success of DNNs on a task depends heavily on the availability of a large labeled dataset, which is not a feasible option in our setting (i.e., classifying tweets at the onset of an Earthquake). On the other hand, in most cases, we can have access to a good amount of labeled and abundant unlabeled data from past similar events (e.g., Floods) and possibly some unlabeled data for the current event. In such situations, we need methods that can leverage the labeled and unlabeled data in a past event (we refer to this as a source domain), and that can adapt to a new event (we refer to this as a target domain) without requiring any labeled data in the new event. In other words, we need models that can do domain adaptation to deal with the distribution drift between the domains and semi-supervised learning to leverage the unlabeled data in both domains. Most recent approaches to semi-supervised learning (Yang et al., 2016) and domain adaptation (Ganin et al., 2016) use the automatic feature learning capability of DNN models. 
In this paper, we extend these methods by proposing a novel model that performs domain adaptation and semi-supervised learning within a single unified deep learning framework. In this framework, the basic task-solving network (a convolutional neural network in our case) is put together with two other networks – one for semi-supervised learning and the other for domain adaptation. The semisupervised component learns internal representa1078 tions (features) by predicting contextual nodes in a graph that encodes similarity between labeled and unlabeled training instances. The domain adaptation is achieved by training the feature extractor (or encoder) in adversary with respect to a domain discriminator, a binary classifier that tries to distinguish the domains. The overall idea is to learn high-level abstract representation that is discriminative for the main classification task, but is invariant across the domains. We propose a stochastic gradient descent (SGD) algorithm to train the components of our model simultaneously. The evaluation of our proposed model is conducted using two Twitter datasets on scenarios where there is only unlabeled data in the target domain. Our results demonstrate the following. 1. When the network combines the semisupervised component with the supervised component, depending on the amount of labeled data used, it gives 5% to 26% absolute gains in F1 compared to when it uses only the supervised component. 2. Domain adaptation with adversarial training improves over the adaptation baseline (i.e., a transfer model) by 1.8% to 4.1% absolute F1. 3. When the network combines domain adversarial training with semi-supervised learning, we get further gains ranging from 5% to 7% absolute in F1 across events. Our source code is available on Github1 and the data is available on CrisisNLP2. The rest of the paper is organized as follows. In Section 2, we present the proposed method, i.e., domain adaptation and semi-supervised graph embedding learning. In Section 3, we present the experimental setup and baselines. The results and analysis are presented in Section 4. In Section 5, we present the works relevant to this study. Finally, conclusions appear in Section 6. 2 The Model We demonstrate our approach for domain adaptation with adversarial training and graph embedding on a tweet classification task to support crisis response efforts. Let Dl S = {ti, yi}Ls i=1 and Du S = {ti}Us i=1 be the set of labeled and unlabeled tweets for a source crisis event S (e.g., 1https://github.com/firojalam/ domain-adaptation 2http://crisisnlp.qcri.org Nepal earthquake), where yi ∈{1, . . . , K} is the class label for tweet ti, Ls and Us are the number of labeled and unlabeled tweets for the source event, respectively. In addition, we have unlabeled tweets Du T = {ti}Ut i=1 for a target event T (e.g., Queensland flood) with Ut being the number of unlabeled tweets in the target domain. Our ultimate goal is to train a cross-domain model p(y|t, θ) with parameters θ that can classify any tweet in the target event T without having any information about class labels in T. Figure 1 shows the overall architecture of our neural model. The input to the network is a tweet t = (w1, . . . , wn) containing words that come from a finite vocabulary V defined from the training set. The first layer of the network maps each of these words into a distributed representation Rd by looking up a shared embedding matrix E ∈ R|V|×d. 
We initialize the embedding matrix E in our network with word embeddings that are pretrained on a large crisis dataset (Subsection 2.5). However, embedding matrix E can also be initialize randomly. The output of the look-up layer is a matrix X ∈Rn×d, which is passed through a number of convolution and pooling layers to learn higher-level feature representations. A convolution operation applies a filter u ∈Rk.d to a window of k vectors to produce a new feature ht as ht = f(u.Xt:t+k−1) (1) where Xt:t+k−1 is the concatenation of k look-up vectors, and f is a nonlinear activation; we use rectified linear units or ReLU. We apply this filter to each possible k-length windows in X with stride size of 1 to generate a feature map hj as: hj = [h1, . . . , hn+k−1] (2) We repeat this process N times with N different filters to get N different feature maps. We use a wide convolution (Kalchbrenner et al., 2014), which ensures that the filters reach the entire tweet, including the boundary words. This is done by performing zero-padding, where out-ofrange (i.e., t<1 or t>n) vectors are assumed to be zero. With wide convolution, o zero-padding size and 1 stride size, each feature map contains (n + 2o −k + 1) convoluted features. After the convolution, we apply a max-pooling operation to each of the feature maps, m = [µp(h1), · · · , µp(hN)] (3) 1079 ! ! ! ! ! ! ! Softmax Dense (z) Max pooling Convolution Pre-trained Word Embeddings w1 w2 wn-1 wn Input tweet Feature map Softmax Class label Graph context Dense (zg) Dense (zc) Dense (zs) Sigmoid Supervised loss LC Semi-supervised Domain adversary loss LD Gradient reversal ∂LD ∂Ψ Shared Components ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! loss LG −λd ∂LD ∂Λ Dense (zd) ! ! ! Domain label Figure 1: The system architecture of the domain adversarial network with graph-based semi-supervised learning. The shared components part is shared by supervised, semi-supervised and domain classifier. where µp(hj) refers to the max operation applied to each window of p features with stride size of 1 in the feature map hi. Intuitively, the convolution operation composes local features into higherlevel representations in the feature maps, and maxpooling extracts the most important aspects of each feature map while reducing the output dimensionality. Since each convolution-pooling operation is performed independently, the features extracted become invariant in order (i.e., where they occur in the tweet). To incorporate order information between the pooled features, we include a fully-connected (dense) layer z = f(V m) (4) where V is the weight matrix. We choose a convolutional architecture for feature composition because it has shown impressive results on similar tasks in a supervised setting (Nguyen et al., 2017). The network at this point splits into three branches (shaded with three different colors in Figure 1) each of which serves a different purpose and contributes a separate loss to the overall loss of the model as defined below: L(Λ, Φ, Ω, Ψ) = LC(Λ, Φ) + λgLG(Λ, Ω) + λdLD(Λ, Ψ) (5) where Λ = {U, V } are the convolutional filters and dense layer weights that are shared across the three branches. The first component LC(Λ, Φ) is a supervised classification loss based on the labeled data in the source event. The second component LG(Λ, Ω) is a graph-based semi-supervised loss that utilizes both labeled and unlabeled data in the source and target events to induce structural similarity between training instances. 
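Before the third loss component is introduced, the sketch below shows one way the shared feature extractor of Eqs. (1)–(4) — embedding look-up, wide convolution with ReLU, windowed max-pooling, and the dense layer producing z — could be written in PyTorch. Filter widths, counts, and dimensions are illustrative assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn as nn

class TweetEncoder(nn.Module):
    """Shared components: embedding -> wide convolution -> max-pooling -> dense (z)."""
    def __init__(self, vocab_size, emb_dim=300, n_filters=100, k=3, p=4,
                 z_dim=100, max_len=30):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # initialised from crisis word2vec
        self.emb.weight.requires_grad = False           # kept fixed during training
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=k,
                              padding=k - 1)             # wide convolution (zero-padding o=k-1)
        self.pool = nn.MaxPool1d(kernel_size=p, stride=1)
        pooled_len = (max_len + 2 * (k - 1) - k + 1) - p + 1
        self.dense = nn.Linear(n_filters * pooled_len, z_dim)

    def forward(self, tokens):                  # tokens: (batch, max_len) word ids
        X = self.emb(tokens).transpose(1, 2)    # (batch, emb_dim, max_len)
        h = torch.relu(self.conv(X))            # feature maps, Eqs. (1)-(2)
        m = self.pool(h)                        # windowed max-pooling, Eq. (3)
        return torch.relu(self.dense(m.flatten(1)))   # shared representation z, Eq. (4)

enc = TweetEncoder(vocab_size=50000)
z = enc(torch.randint(0, 50000, (8, 30)))       # (8, 100)
```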
The third component LD(Λ, Ω) is an adversary loss that again uses all available data in the source and target domains to induce domain invariance in the learned features. The tunable hyperparameters λg and λd control the relative strength of the components. 2.1 Supervised Component The supervised component induces label information (e.g., relevant vs. non-relevant) directly in the network through the classification loss LC(Λ, Φ), which is computed on the labeled instances in the source event, Dl S. Specifically, this branch of the network, as shown at the top in Figure 1, takes the shared representations z as input and pass it through a task-specific dense layer zc = f(Vcz) (6) where Vc is the corresponding weight matrix. The activations zc along with the activations from the semi-supervised branch zs are used for classification. More formally, the classification layer defines a Softmax p(y = k|t, θ) = exp W T k [zc; zs]  P k′ exp W T k′ [zc; zs]  (7) where [.; .] denotes concatenation of two column vectors, Wk are the class weights, and θ = {U, V, Vc, W} defines the relevant parameters for this branch of the network with Λ = {U, V } being the shared parameters and Φ = {Vc, W} being the parameters specific to this branch. Once learned, 1080 we use θ for prediction on test tweets. The classification loss LC(Λ, Φ) (or LC(θ)) is defined as LC(Λ, Φ) = −1 Ls Ls X i=1 I(yi = k) log p(yi = k|ti, Λ, Φ) (8) where I(.) is an indicator function that returns 1 when the argument is true, otherwise it returns 0. 2.2 Semi-supervised Component The semi-supervised branch (shown at the middle in Figure 1) induces structural similarity between training instances (labeled or unlabeled) in the source and target events. We adopt the recently proposed graph-based semi-supervised deep learning framework (Yang et al., 2016), which shows impressive gains over existing semisupervised methods on multiple datasets. In this framework, a “similarity” graph G first encodes relations between training instances, which is then used by the network to learn internal representations (i.e., embeddings). 2.2.1 Learning Graph Embeddings The semi-supervised branch takes the shared representation z as input and learns internal representations by predicting a node in the graph context of the input tweet. Following (Yang et al., 2016), we use negative sampling to compute the loss for predicting the context node, and we sample two types of contextual nodes: (i) one is based on the graph G to encode structural information, and (ii) the second is based on the labels in Dl S to incorporate label information through this branch of the network. The ratio of positive and negative samples is controlled by a random variable ρ1 ∈(0, 1), and the proportion of the two context types is controlled by another random variable ρ2 ∈(0, 1); see Algorithm 1 of (Yang et al., 2016) for details on the sampling procedure. Let (j, γ) is a tuple sampled from the distribution p(j, γ|i, Dl S, G), where j is a context node of an input node i and γ ∈{+1, −1} denotes whether it is a positive or a negative sample; γ = +1 if ti and tj are neighbors in the graph (for graph-based context) or they both have same labels (for label-based context), otherwise γ = −1. The negative log loss for context prediction LG(Λ, Ω) can be written as LG(Λ, Ω) = − 1 Ls + Us Ls+Us X i=1 E(j,γ) log σ  γCT j zg(i)  (9) where zg(i) = f(Vgz(i)) defines another dense layer (marked as Dense (zg) in Figure 1) having weights Vg, and Cj is the weight vector associated with the context node tj. 
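A sketch of this context-prediction loss with pre-sampled (j, γ) pairs is given below; the sampling of graph- versus label-based contexts (governed by ρ1 and ρ2) is assumed to happen outside the module, following Yang et al. (2016), and all names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class GraphContextLoss(nn.Module):
    """Semi-supervised context-prediction loss of Eq. (9) with sampled (j, gamma) pairs."""
    def __init__(self, z_dim, zg_dim, num_nodes):
        super().__init__()
        self.dense_g = nn.Linear(z_dim, zg_dim)           # Dense (z_g)
        self.context = nn.Embedding(num_nodes, zg_dim)    # context vectors C_j

    def forward(self, z, context_ids, gamma):
        # z: (batch, z_dim) shared representations; context_ids: (batch,) sampled nodes j
        # gamma: (batch,) +1 for positive samples, -1 for negative samples
        zg = torch.relu(self.dense_g(z))
        score = (self.context(context_ids) * zg).sum(dim=1)        # C_j^T z_g(i)
        return -torch.log(torch.sigmoid(gamma * score) + 1e-10).mean()

crit = GraphContextLoss(z_dim=100, zg_dim=50, num_nodes=20000)
loss = crit(torch.randn(16, 100),
            torch.randint(0, 20000, (16,)),
            torch.tensor([1.0, -1.0] * 8))
```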
Note that here Λ = {U, V } defines the shared parameters and Ω= {Vg, C} defines the parameters specific to the semi-supervised branch of the network. 2.2.2 Graph Construction Typically graphs are constructed based on a relational knowledge source, e.g., citation links in (Lu and Getoor, 2003), or distance between instances (Zhu, 2005). However, we do not have access to such a relational knowledge in our setting. On the other hand, computing distance between n(n−1)/2 pairs of instances to construct the graph is also very expensive (Muja and Lowe, 2014). Therefore, we choose to use k-nearest neighborbased approach as it has been successfully used in other study (Steinbach et al., 2000). The nearest neighbor graph consists of n vertices and for each vertex, there is an edge set consisting of a subset of n instances, i.e., tweets in our training set. The edge is defined by the distance measure d(i, j) between tweets ti and tj, where the value of d represents how similar the two tweets are. We used k-d tree data structure (Bentley, 1975) to efficiently find the nearest instances. To construct the graph, we first represent each tweet by averaging the word2vec vectors of its words, and then we measure d(i, j) by computing the Euclidean distance between the vectors. The number of nearest neighbor k was set to 10. The reason of averaging the word vectors is that it is computationally simpler and it captures the relevant semantic information for our task in hand. Likewise, we choose to use Euclidean distance instead of cosine for computational efficiency. 2.3 Domain Adversarial Component The network described so far can learn abstract features through convolutional and dense layers that are discriminative for the classification task (relevant vs. non-relevant). The supervised branch of the network uses labels in the source event to induce label information directly, whereas the semi-supervised branch induces similarity information between labeled and unlabeled instances. However, our goal is also to make these learned features invariant across domains or events (e.g., Nepal Earthquake vs. Queensland Flood). We achieve this by domain adversarial training of 1081 neural networks (Ganin et al., 2016). We put a domain discriminator, another branch in the network (shown at the bottom in Figure 1) that takes the shared internal representation z as input, and tries to discriminate between the domains of the input — in our case, whether the input tweet is from DS or from DT . The domain discriminator is defined by a sigmoid function: ˆδ = p(d = 1|t, Λ, Ψ) = sigm(wT d zd) (10) where d ∈{0, 1} denotes the domain of the input tweet t, wd are the final layer weights of the discriminator, and zd = f(Vdz) defines the hidden layer of the discriminator with layer weights Vd. Here Λ = {U, V } defines the shared parameters, and Ψ = {Vd, wd} defines the parameters specific to the domain discriminator. We use the negative log-probability as the discrimination loss: Ji(Λ, Ψ) = −di log ˆδ −(1 −di) log  1 −ˆδ  (11) We can write the overall domain adversary loss over the source and target domains as LD(Λ, Ψ) = − 1 Ls + Us Ls+Us X i=1 Ji(Λ, Ψ) −1 Ut Ut X i=1 Ji(Λ, Ψ) (12) where Ls + Us and Ut are the number of training instances in the source and target domains, respectively. In adversarial training, we seek parameters (saddle point) such that θ∗= argmin Λ,Φ,Ω max Ψ L(Λ, Φ, Ω, Ψ) (13) which involves a maximization with respect to Ψ and a minimization with respect to {Λ, Φ, Ω}. 
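A minimal sketch of how this saddle point is commonly realised with a gradient reversal layer is shown below; it is an illustrative PyTorch implementation, not the authors' released code, and only the value λd = 1e−8 is taken from the paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda_d in the backward
    pass, so the shared encoder is updated adversarially to the discriminator (Eq. 13)."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainDiscriminator(nn.Module):
    def __init__(self, z_dim=100, hidden=50, lambd=1e-8):
        super().__init__()
        self.lambd = lambd
        self.net = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))     # Dense (z_d) + sigmoid output

    def forward(self, z):
        z = GradReverse.apply(z, self.lambd)
        return torch.sigmoid(self.net(z)).squeeze(1)        # p(d = 1 | t), Eq. (10)

disc = DomainDiscriminator()
z = torch.randn(8, 100, requires_grad=True)                  # shared representations
d_hat = disc(z)
domain = torch.tensor([1., 1., 1., 1., 0., 0., 0., 0.])      # source vs. target labels
loss_d = nn.functional.binary_cross_entropy(d_hat, domain)   # Eqs. (11)-(12)
loss_d.backward()   # gradients reaching z are reversed before hitting the encoder
```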
In other words, the updates of the shared parameters Λ = {U, V } for the discriminator work adversarially to the rest of the network, and vice versa. This is achieved by reversing the gradients of the discrimination loss LD(Λ, Ψ), when they are backpropagated to the shared layers (see Figure 1). 2.4 Model Training Algorithm 1 illustrates the training algorithm based on stochastic gradient descent (SGD). We first initialize the model parameters. The word embedding matrix E is initialized with pre-trained word2vec vectors (see Subsection 2.5) and is kept fixed during training.3 Other parameters are initialized with small random numbers sampled from 3Tuning E on our task by backpropagation increased the training time immensely (3 days compared to 5 hours on a Tesla GPU) without any significant performance gain. Algorithm 1: Model Training with SGD Input : data Dl S, Du S, Du T ; graph G Output: learned parameters θ = {Λ, Φ} 1. Initialize model parameters {E, Λ, Φ, Ω, Ψ}; 2. repeat // Semi-supervised for each batch sampled from p(j, γ|i, Dl S, G) do a) Compute loss LG(Λ, Ω) b) Take a gradient step for LG(Λ, Ω); end // Supervised & domain adversary for each batch sampled from Dl S do a) Compute LC(Λ, Φ) and LD(Λ, Ψ) b) Take gradient steps for LC(Λ, Φ) and LD(Λ, Ψ); end // Domain adversary for each batch sampled from Du S and Du T do a) Compute LD(Λ, Ψ) b) Take a gradient step for LD(Λ, Ψ); end until convergence; a uniform distribution (Bengio and Glorot, 2010). We use AdaDelta (Zeiler, 2012) adaptive update to update the parameters. In each iteration, we do three kinds of gradient updates to account for the three different loss components. First, we do an epoch over all the training instances updating the parameters for the semi-supervised loss, then we do an epoch over the labeled instances in the source domain, each time updating the parameters for the supervised and the domain adversary losses. Finally, we do an epoch over the unlabeled instances in the two domains to account for the domain adversary loss. The main challenge in adversarial training is to balance the competing components of the network. If one component becomes smarter than the other, its loss to the shared layer becomes useless, and the training fails to converge (Arjovsky et al., 2017). Equivalently, if one component becomes weaker, its loss overwhelms that of the other, causing the training to fail. In our experiments, we observed the domain discriminator is weaker than the rest of the network. This could be due to the noisy nature of tweets, which makes the job for the domain discriminator harder. To balance the components, we would want the error signals from the discriminator to be fairly weak, also we would want the supervised loss to have more impact than the semi-supervised loss. In our experiments, the weight of the domain adversary loss λd was fixed to 1e −8, and the weight of the semi-supervised loss λg was fixed to 1e −2. Other sophisticated weighting schemes have been proposed recently 1082 (Ganin et al., 2016; Arjovsky et al., 2017; Metz et al., 2016). It would be interesting to see how our model performs using these advanced tuning methods, which we leave as a future work. 2.5 Crisis Word Embedding As mentioned, we used word embeddings that are pre-trained on a crisis dataset. To train the wordembedding model, we first pre-processed tweets collected using the AIDR system (Imran et al., 2014) during different events occurred between 2014 and 2016. 
In the preprocessing step, we lowercased the tweets and removed URLs, digit, time patterns, special characters, single character, username started with the @ symbol. After preprocessing, the resulting dataset contains about 364 million tweets and about 3 billion words. There are several approaches to train word embeddings such as continuous bag-of-words (CBOW) and skip-gram models of wrod2vec (Mikolov et al., 2013), and Glove (Pennington et al., 2014). For our work, we trained the CBOW model from word2vec. While training CBOW, we filtered out words with a frequency less than or equal to 5, and we used a context window size of 5 and k = 5 negative samples. The resulting embedding model contains about 2 million words with vector dimensions of 300. 3 Experimental Settings In this section, we describe our experimental settings – datasets used, settings of our models, compared baselines, and evaluation metrics. 3.1 Datasets To conduct the experiment and evaluate our system, we used two real-world Twitter datasets collected during the 2015 Nepal earthquake (NEQ) and the 2013 Queensland floods (QFL). These datasets are comprised of millions of tweets collected through the Twitter streaming API4 using event-specific keywords/hashtags. To obtain the labeled examples for our task we employed paid workers from the Crowdflower5 – a crowdsourcing platform. The annotation consists of two classes relevant and non-relevant. For the annotation, we randomly sampled 11,670 and 10,033 tweets from the Nepal earthquake and the Queensland floods datasets, respectively. Given a 4https://dev.twitter.com/streaming/overview 5http://crowdflower.com Dataset Relevant Non-relevant Train Dev Test NEQ 5,527 6,141 7,000 1,167 3,503 QFL 5,414 4,619 6,019 1,003 3,011 Table 1: Distribution of labeled datasets for Nepal earthquake (NEQ) and Queensland flood (QFL). tweet, we asked crowdsourcing workers to assign the “relevant” label if the tweet conveys/reports information useful for crisis response such as a report of injured or dead people, some kind of infrastructure damage, urgent needs of affected people, donations requests or offers, otherwise assign the “non-relevant” label. We split the labeled data into 60% as training, 30% as test and 10% as development. Table 1 shows the resulting datasets with class-wise distributions. Data preprocessing was performed by following the same steps used to train the word2vec model (Subsection 2.5). In all the experiments, the classification task consists of two classes: relevant and non-relevant. 3.2 Model Settings and Baselines In order to demonstrate the effectiveness of our joint learning approach, we performed a series of experiments. To understand the contribution of different network components, we performed an ablation study showing how the model performs as a semi-supervised model alone and as a domain adaptation model alone, and then we compare them with the combined model that incorporates all the components. 3.2.1 Settings for Semi-supervised Learning As a baseline for the semi-supervised experiments, we used the self-training approach (Scudder, 1965). For this purpose, we first trained a supervised model using the CNN architecture (i.e., shared components followed by the supervised part in Figure 1). The trained model was then used to automatically label the unlabeled data. Instances with a classifier confidence score ≥0.75 were then used to retrain a new model. 
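The self-training baseline reduces to the short Python sketch below; train_model and predict_proba stand in for whatever supervised CNN training and scoring routines are used and are not names from the paper.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.75

def self_training_round(train_model, X_labeled, y_labeled, X_unlabeled):
    """One round of self-training: pseudo-label the unlabeled tweets with a supervised
    model, keep only confident predictions, and retrain on the augmented set."""
    base = train_model(X_labeled, y_labeled)
    proba = base.predict_proba(X_unlabeled)                  # (num_unlabeled, num_classes)
    confidence, pseudo = proba.max(axis=1), proba.argmax(axis=1)
    keep = confidence >= CONFIDENCE_THRESHOLD
    X_aug = np.concatenate([X_labeled, X_unlabeled[keep]])
    y_aug = np.concatenate([y_labeled, pseudo[keep]])
    return train_model(X_aug, y_aug)
```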
Next, we run experiments using our graphbased semi-supervised approach (i.e., shared components followed by the supervised and semisupervised parts in Figure 1), which exploits unlabeled data. For reducing the computational cost, we randomly selected 50K unlabeled instances from the same domain. For our semi-supervised setting, one of the main goals was to understand how much labeled data is sufficient to obtain a 1083 reasonable result. Therefore, we experimented our system by incrementally adding batches of instances, such as 100, 500, 2000, 5000, and all instances from the training set. Such an understanding can help us design the model at the onset of a crisis event with sufficient amount of labeled data. To demonstrate that the semi-supervised approach outperforms the supervised baseline, we run supervised experiments using the same number of labeled instances. In the supervised setting, only zc activations in Figure 1 are used for classification. 3.2.2 Settings for Domain Adaptation To set a baseline for the domain adaptation experiments, we train a CNN model (i.e., shared components followed by the supervised part in Figure 1) on one event (source) and test it on another event (target). We call this as transfer baseline. To assess the performance of our domain adaptation technique alone, we exclude the semisupervised component from the network. We train and evaluate models with this network configuration using different source and target domains. Finally, we integrate all the components of the network as shown in Figure 1 and run domain adaptation experiments using different source and target domains. In all our domain adaptation experiments, we only use unlabeled instances from the target domain. In domain adaption literature, this is known as unsupervised adaptation. 3.2.3 Training Settings We use 100, 150, and 200 filters each having the window size of 2, 3, and 4, respectively, and pooling length of 2, 3, and 4, respectively. We do not tune these hyperparameters in any experimental setting since the goal was to have an end-to-end comparison with the same hyperparameter setting and understand whether our approach can outperform the baselines or not. Furthermore, we do not filter out any vocabulary item in any settings. As mentioned before in Subsection 2.4, we used AdaDelta (Zeiler, 2012) to update the model parameters in each SGD step. The learning rate was set to 0.1 when optimizing on the classification loss and to 0.001 when optimizing on the semisupervised loss. The learning rate for domain adversarial training was set to 1.0. The maximum number of epochs was set to 200, and dropout rate of 0.02 was used to avoid overfitting (Srivastava et al., 2014). We used validation-based early stopping using the F-measure with a patience of 25, Experiments AUC P R F1 NEPAL EARTHQUAKE Supervised 61.22 62.42 62.31 60.89 Semi-supervised (Self-training) 61.15 61.53 61.53 61.26 Semi-supervised (Graph-based) 64.81 64.58 64.63 65.11 QUEENSLAND FLOODS Supervised 80.14 80.08 80.16 80.16 Semi-supervised (Self-training) 81.04 80.78 80.84 81.08 Semi-supervised (Graph-based) 92.20 92.60 94.49 93.54 Table 2: Results using supervised, self-training, and graph-based semi-supervised approaches in terms of Weighted average AUC, precision (P), recall (R) and F-measure (F1). i.e., we stop training if the score does not increase for 25 consecutive epochs. 
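The filter settings above can be realized, for example, as three parallel convolution-and-pooling branches; the PyTorch sketch below assumes this parallel wiring and 300-dimensional input embeddings, neither of which is spelled out in this excerpt, so it should be read as an illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvFeatureExtractor(nn.Module):
    """Multi-window 1-D CNN with the filter settings of Subsection 3.2.3."""
    def __init__(self, emb_dim=300):
        super().__init__()
        specs = [(100, 2, 2), (150, 3, 3), (200, 4, 4)]   # (filters, window size, pooling length)
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv1d(emb_dim, n_filters, window),
                          nn.ReLU(),
                          nn.MaxPool1d(pool))
            for n_filters, window, pool in specs
        ])

    def forward(self, x):                      # x: (batch, emb_dim, seq_len)
        feats = [branch(x).flatten(start_dim=1) for branch in self.branches]
        return torch.cat(feats, dim=1)         # pooled features from all three branches
```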
3.2.4 Evaluation Metrics To measure the performance of the trained models using different approaches described above, we use weighted average precision, recall, F-measure, and Area Under ROC-Curve (AUC), which are standard evaluation measures in the NLP and machine learning communities. The rationale behind choosing the weighted metric is that it takes into account the class imbalance problem. 4 Results and Discussion In this section, we present the experimental results and discuss our main findings. 4.1 Semi-supervised Learning In Table 2, we present the results obtained from the supervised, self-training based semi-supervised, and our graph-based semi-supervised experiments for the both datasets. It can be clearly observed that the graph-based semi-supervised approach outperforms the two baselines – supervised and self-training based semi-supervised. Specifically, the graph-based approach shows 4% to 13% absolute improvements in terms of F1 scores for the Nepal and Queensland datasets, respectively. To determine how the semi-supervised approach performs in the early hours of an event when only fewer labeled instances are available, we mimic a batch-wise (not to be confused with minibatch in SGD) learning setting. In Table 3, we present the results using different batch sizes – 100, 500, 1,000, 2,000, and all labels. From the results, we observe that models’ performance improve as we include more labeled data 1084 Exp. 100 500 1000 2000 All L NEPAL EARTHQUAKE L 43.63 52.89 56.37 60.11 60.89 L+50kU 52.32 59.95 61.89 64.05 65.11 QUEENSLAND FLOOD L 48.97 76.62 80.62 79.16 80.16 L+∼21kU 75.08 85.54 89.08 91.54 93.54 Table 3: Weighted average F-measure for the graph-based semi-supervised settings using different batch sizes. L refers to labeled data, U refers to unlabeled data, All L refers to all labeled instances for that particular dataset. — from 43.63 to 60.89 for NEQ and from 48.97 to 80.16 for QFL in the case of labeled only (L). When we compare supervised vs. semi-supervised (L vs. L+U), we observe significant improvements in F1 scores for the semi-supervised model for all batches over the two datasets. As we include unlabeled instances with labeled instances from the same event, performance significantly improves in each experimental setting giving 5% to 26% absolute improvements over the supervised models. These improvements demonstrate the effectiveness of our approach. We also notice that our semi-supervised approach can perform above 90% depending on the event. Specifically, major improvements are observed from batch size 100 to 1,000, however, after that the performance improvements are comparatively minor. The results obtained using batch sizes 500 and 1,000 are reasonably in the acceptable range when labeled and unlabeled instances are combined (i.e., L+50kU for Nepal and L+∼21kU for Queensland), which is also a reasonable number of training examples to obtain at the onset of an event. 4.2 Domain Adaptation In Table 4, we present domain adaptation results. The first block shows event-specific (i.e., train and test on the same event) results for the supervised CNN model. These results set the upper bound for our domain adaptation methods. The transfer baselines are shown in the next block, where we train a CNN model in one domain and test it on a different domain. Then, the third block shows the results for the domain adversarial approach without the semi-supervised loss. These results show the importance of domain adversarial component. 
After that, the fourth block presents the performance of the model trained with graph Source Target AUC P R F1 IN-DOMAIN SUPERVISED MODEL Nepal Nepal 61.22 62.42 62.31 60.89 Queensland Queensland 80.14 80.08 80.16 80.16 TRANSFER BASELINES Nepal Queensland 58.99 59.62 60.03 59.10 Queensland Nepal 54.86 56.00 56.21 53.63 DOMAIN ADVERSARIAL Nepal Queensland 60.15 60.62 60.71 60.94 Queensland Nepal 57.63 58.05 58.05 57.79 GRAPH EMBEDDING WITHOUT DOMAIN ADVERSARIAL Nepal Queensland 60.38 60.86 60.22 60.54 Queensland Nepal 54.60 54.58 55.00 54.79 GRAPH EMBEDDING WITH DOMAIN ADVERSARIAL Nepal Queensland 66.49 67.48 65.90 65.92 Queensland Nepal 58.81 58.63 59.00 59.05 Table 4: Domain adaptation experimental results. Weighted average AUC, precision (P), recall (R) and F-measure (F1). embedding without domain adaptation to show the importance of semi-supervised learning. The final block present the results for the complete model that includes all the loss components. The results with domain adversarial training show improvements across both events – from 1.8% to 4.1% absolute gains in F1. These results attest that adversarial training is an effective approach to induce domain invariant features in the internal representation as shown previously by Ganin et al. (2016). Finally, when we do both semi-supervised learning and unsupervised domain adaptation, we get further improvements in F1 scores ranging from 5% to 7% absolute gains. From these improvements, we can conclude that domain adaptation with adversarial training along with graphbased semi-supervised learning is an effective method to leverage unlabeled and labeled data from a different domain. Note that for our domain adaptation methods, we only use unlabeled data from the target domain. Hence, we foresee future improvements of this approach by utilizing a small amount of target domain labeled data. 5 Related Work Two lines of research are directly related to our work: (i) semi-supervised learning and (ii) domain adaptation. Several models have been proposed for semi-supervised learning. The earliest approach is self-training (Scudder, 1965), in 1085 which a trained model is first used to label unlabeled data instances followed by the model retraining with the most confident predicted labeled instances. The co-training (Mitchell, 1999) approach assumes that features can be split into two sets and each subset is then used to train a classifier with an assumption that the two sets are conditionally independent. Then each classifier classifies the unlabeled data, and then most confident data instances are used to re-train the other classifier, this process repeats multiple times. In the graph-based semi-supervised approach, nodes in a graph represent labeled and unlabeled instances and edge weights represent the similarity between them. The structural information encoded in the graph is then used to regularize a model (Zhu, 2005). There are two paradigms in semi-supervised learning: 1) inductive – learning a function with which predictions can be made on unobserved instances, 2) transductive – no explicit function is learned and predictions can only be made on observed instances. As mentioned before, inductive semi-supervised learning is preferable over the transductive approach since it avoids building the graph each time it needs to infer the labels for the unlabeled instances. In our work, we use a graph-based inductive deep learning approach proposed by Yang et al. 
(2016) to learn features in a deep learning model by predicting contextual (i.e., neighboring) nodes in the graph. However, our approach is different from Yang et al. (2016) in several ways. First, we construct the graph by computing the distance between tweets based on word embeddings. Second, instead of using count-based features, we use a convolutional neural network (CNN) to compose high-level features from the distributed representation of the words in a tweet. Finally, for context prediction, instead of performing a random walk, we select nodes based on their similarity in the graph. Similar similarity-based graph has shown impressive results in learning sentence representations (Saha et al., 2017). In the literature, the proposed approaches for domain adaptation include supervised, semisupervised and unsupervised. It also varies from linear kernelized approach (Blitzer et al., 2006) to non-linear deep neural network techniques (Glorot et al., 2011; Ganin et al., 2016). One direction of research is to focus on feature space distribution matching by reweighting the samples from the source domain (Gong et al., 2013) to map source into target. The overall idea is to learn a good feature representation that is invariant across domains. In the deep learning paradigm, Glorot et al. (Glorot et al., 2011) used Stacked Denoising Auto-Encoders (SDAs) for domain adaptation. SDAs learn a robust feature representation, which is artificially corrupted with small Gaussian noise. Adversarial training of neural networks has shown big impact recently, especially in areas such as computer vision, where generative unsupervised models have proved capable of synthesizing new images (Goodfellow et al., 2014; Radford et al., 2015; Makhzani et al., 2015). Ganin et al. (2016) proposed domain adversarial neural networks (DANN) to learn discriminative but at the same time domain-invariant representations, with domain adaptation as a target. We extend this work by combining with semi-supervised graph embedding for unsupervised domain adaptation. In a recent work, Kipf and Welling (2016) present CNN applied directly on graph-structured datasets - citation networks and on a knowledge graph dataset. Their study demonstrate that graph convolution network for semi-supervised classification performs better compared to other graph based approaches. 6 Conclusions In this paper, we presented a deep learning framework that performs domain adaptation with adversarial training and graph-based semi-supervised learning to leverage labeled and unlabeled data from related events. We use a convolutional neural network to compose high-level representation from the input, which is then passed to three components that perform supervised training, semisupervised learning and domain adversarial training. For domain adaptation, we considered a scenario, where we have only unlabeled data in the target event. Our evaluation on two crisis-related tweet datasets demonstrates that by combining domain adversarial training with semi-supervised learning, our model gives significant improvements over their respective baselines. We have also presented results of batch-wise incremental training of the graph-based semi-supervised approach and show approximation regarding the number of labeled examples required to get an acceptable performance at the onset of an event. 1086 References Mart´ın Arjovsky, Soumith Chintala, and L´eon Bottou. 2017. Wasserstein GAN. CoRR abs/1701.07875. http://arxiv.org/abs/1701.07875. Yoshua Bengio and Xavier Glorot. 
2010. Understanding the difficulty of training deep feedforward neural networks. In Proc. of the 13th Intl. Conference on Artificial Intelligence and Statistics. Sardinia, Italy, AISTATS ’10, pages 249–256. Jon Louis Bentley. 1975. Multidimensional binary search trees used for associative searching. Communications of the ACM 18(9):509–517. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proc. of EMNLP. ACL, pages 120–128. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of MLR 17(59):1–35. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th international conference on machine learning (ICML-11). pages 513–520. Boqing Gong, Kristen Grauman, and Fei Sha. 2013. Connecting the dots with landmarks: Discriminatively learning domain-invariant features for unsupervised domain adaptation. In ICML (1). pages 222–230. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, Curran Associates, Inc., pages 2672– 2680. Muhammad Imran, Carlos Castillo, Fernando Diaz, and Sarah Vieweg. 2015. Processing social media messages in mass emergency: A survey. ACM Computing Surveys (CSUR) 47(4):67. Muhammad Imran, Carlos Castillo, Ji Lucas, Patrick Meier, and Sarah Vieweg. 2014. AIDR: Artificial intelligence for disaster response. In Proceedings of the 23rd International Conference on World Wide Web. ACM, pages 159–162. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188 . Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 . Qing Lu and Lise Getoor. 2003. Link-based classification. In Proc. of ICML. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian J. Goodfellow. 2015. Adversarial autoencoders. CoRR abs/1511.05644. http://arxiv.org/abs/1511.05644. Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. 2016. Unrolled generative adversarial networks. CoRR abs/1611.02163. http://arxiv.org/abs/1611.02163. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of the International Conference on Learning Representations. Available as arXiv preprint arXiv:1301.3781. Tom Mitchell. 1999. The role of unlabeled data in supervised learning. In Proceedings of the sixth international colloquium on cognitive science. Citeseer, pages 2–11. Marius Muja and David G Lowe. 2014. Scalable nearest neighbor algorithms for high dimensional data. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(11):2227–2240. Dat Nguyen, Kamela Ali Al Mannai, Shafiq Joty, Hassan Sajjad, Muhammad Imran, and Prasenjit Mitra. 2017. Robust classification of crisis-related data on social networks using convolutional neural networks. In International AAAI Conference on Web and Social Media. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. 
Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532– 1543. http://www.aclweb.org/anthology/D14-1162. Robert Power, Bella Robinson, and David Ratcliffe. 2013. Finding fires with twitter. In Proceedings of the Australasian Language Technology Association Workshop 2013 (ALTA 2013). pages 80–89. Alec Radford, Luke Metz, and Soumith Chintala. 2015. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR abs/1511.06434. http://arxiv.org/abs/1511.06434. Tanay Saha, Shafiq Joty, Naeemul Hassan, and Mohammad Hasan. 2017. Regularized and retrofitted models for learning sentence representation with context. In CIKM. ACM, Singapore, pages 547– 556. H Scudder. 1965. Probability of error of some adaptive pattern-recognition machines. IEEE Transactions on Information Theory 11(3):363–371. 1087 Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(1):1929–1958. Michael Steinbach, George Karypis, Vipin Kumar, et al. 2000. A comparison of document clustering techniques. In KDD workshop on text mining. Boston, volume 400, pages 525–526. Istv´an Varga, Motoki Sano, Kentaro Torisawa, Chikara Hashimoto, Kiyonori Ohtake, Takao Kawai, JongHoon Oh, and Stijn De Saeger. 2013. Aid is out there: Looking for help from tweets during a large scale disaster. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1619–1629. http://www.aclweb.org/anthology/P13-1159. Sarah Vieweg, Carlos Castillo, and Muhammad Imran. 2014. Integrating social media communications into the rapid assessment of sudden onset disasters. In International Conference on Social Informatics. Springer, pages 444–461. Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2016. Revisiting semi-supervised learning with graph embeddings. arXiv preprint arXiv:1603.08861 . Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 . Xiaojin Zhu. 2005. Semi-supervised learning literature survey. Technical Report 1530, Computer Sciences, University of Wisconsin-Madison.
2018
99
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1–11 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1 One Time of Interaction May Not Be Enough: Go Deep with an Interaction-over-Interaction Network for Response Selection in Dialogues Chongyang Tao1, Wei Wu2, Can Xu2, Wenpeng Hu1, Dongyan Zhao1,3 and Rui Yan1,3∗ 1Institute of Computer Science and Technology, Peking University, Beijing, China 2Microsoft Corporation, Beijing, China 3Center for Data Science, Peking University, Beijing, China 1,3{chongyangtao,wenpeng.hu,zhaody,ruiyan}@pku.edu.cn 2{wuwei,caxu}@microsoft.com Abstract Currently, researchers have paid great attention to retrieval-based dialogues in opendomain. In particular, people study the problem by investigating context-response matching for multi-turn response selection based on publicly recognized benchmark data sets. State-of-the-art methods require a response to interact with each utterance in a context from the beginning, but the interaction is performed in a shallow way. In this work, we let utterance-response interaction go deep by proposing an interaction-over-interaction network (IoI). The model performs matching by stacking multiple interaction blocks in which residual information from one time of interaction initiates the interaction process again. Thus, matching information within an utterance-response pair is extracted from the interaction of the pair in an iterative fashion, and the information flows along the chain of the blocks via representations. Evaluation results on three benchmark data sets indicate that IoI can significantly outperform state-of-theart methods in terms of various matching metrics. Through further analysis, we also unveil how the depth of interaction affects the performance of IoI. 1 Introduction Building a chitchat style dialogue systems in opendomain for human-machine conversations has attracted increasing attention in the conversational artificial intelligence (AI) community. Generally speaking, there are two approaches to implementing such a conversational system. The first approach leverages techniques of information retrieval (Lowe et al., 2015; Wu et al., 2017; Yan and Zhao, 2018), and selects a proper response from an index; while the second approach directly synthesizes a response with a natural lan∗Corresponding author: Rui Yan ([email protected]). guage generation model estimated from a largescale conversation corpus (Serban et al., 2016; Li et al., 2017b). In this work, we study the problem of multi-turn response selection for retrievalbased dialogue systems where the input is a conversation context consisting of a sequence of utterances. Compared with generation-based methods, retrieval-based methods are superior in terms of response fluency and diversity, and thus have been widely applied in commercial chatbots such as the social bot XiaoIce (Shum et al., 2018) from Microsoft, and the e-commerce assistant AliMe Assist from Alibaba Group (Li et al., 2017a). A key step in multi-turn response selection is to measure the matching degree between a conversation context and a response candidate. Stateof-the-art methods (Wu et al., 2017; Zhou et al., 2018b) perform matching within a representationinteraction-aggregation framework (Wu et al., 2018b) where matching signals in each utteranceresponse pair are distilled from their interaction based on their representations, and then are aggregated as a matching score. 
Although utteranceresponse interaction has proven to be crucial to the performance of the matching models (Wu et al., 2017), it is executed in a rather shallow manner where matching between an utterance and a response candidate is determined only by one step of interaction on each type or each layer of representations. In this paper, we attempt to move from shallow interaction to deep interaction, and consider context-response matching with multiple steps of interaction where residual information from one time of interaction, which is generally ignored by existing methods, is leveraged for additional interactions. The underlying motivation is that if a model extracts some matching information from utterance-response pairs in one step of interaction, then by stacking multiple such steps, the model can gradually accumulate useful signals 2 for matching and finally capture the semantic relationship between a context and a response candidate in a more comprehensive way. We propose an interaction-over-interaction network (IoI) for context-response matching, through which we aim to investigate: (1) how to make interaction go deep in a matching model; and (2) if the depth of interaction really matters in terms of matching performance. A key component in IoI is an interaction block. Taking a pair of utteranceresponse as input, the block first lets the utterance and the response attend to themselves, and then measures interaction of the pair by an attentionbased interaction function. The results of the interaction are concatenated with the self-attention representations and then compressed to new representations of the utterance-response pair as the output of the block. Built on top of the interaction block, IoI initializes each utterance-response pair via pre-trained word embeddings, and then passes the initial representations through a chain of interaction blocks which conduct several rounds of representation-interaction-representation operations and let the utterance and the response interact with each other in an iterative way. Different blocks could distill different levels of matching information in an utterance-response pair. To sufficiently leverage the information, a matching score is first calculated in each block through aggregating matching vectors of all utterance-response pairs, and then the block-wise matching scores are combined as the final matching degree of the context and the response candidate. We conduct experiments on three benchmark data sets: the Ubuntu Dialogue Corpus (Lowe et al., 2015), the Douban Conversation Corpus (Wu et al., 2017), and the E-commerce Dialogue Corpus (Zhang et al., 2018b). Evaluation results indicate that IoI can significantly outperform stateof-the-art methods with 7 interaction blocks over all metrics on all the three benchmarks. Compared with deep attention matching network (DAM), the best performing baseline on all the three data sets, IoI achieves 2.9% absolute improvement on R10@1 on the Ubuntu data, 2.3% absolute improvement on MAP on the Douban data, and 3.7% absolute improvement on R10@1 on the Ecommerce data. Through more quantitative analysis, we also show that depth indeed brings improvement to the performance of IoI, as IoI with 1 interaction block performs worse than DAM on the Douban data and the E-commerce data, and on the Ubuntu data, the gap on R10@1 between IoI and DAM is only 1.1%. Moreover, the improvement brought by depth mainly comes from short contexts. 
Our contributions in this paper are three-folds: (1) proposal of a novel interaction-over-interaction network which enables deep-level matching with carefully designed interaction block chains; (2) empirical verification of the effectiveness of the model on three benchmarks; and (3) empirical study on the relationship between interaction depth and model performance. 2 Related Work Existing methods for building an open-domain dialogue system can be categorized into two groups. The first group learns response generation models under an encoder-decoder framework. On top of the basic sequence-to-sequence with attention architecture (Vinyals and Le, 2015; Shang et al., 2015; Tao et al., 2018), various extensions have been made to tackle the “safe response” problem (Li et al., 2015; Mou et al., 2016; Xing et al., 2017; Zhao et al., 2017; Song et al., 2018); to generate responses with specific personas or emotions (Li et al., 2016a; Zhang et al., 2018a; Zhou et al., 2018a); and to pursue better optimization strategies (Li et al., 2017b, 2016b). The second group learns a matching model of a human input and a response candidate for response selection. Along this line, the focus of research starts from single-turn response selection by setting the human input as a single message (Wang et al., 2013; Hu et al., 2014; Wang et al., 2015), and moves to context-response matching for multi-turn response selection recently. Representative methods include the dual LSTM model (Lowe et al., 2015), the deep learning to respond architecture (Yan et al., 2016), the multi-view matching model (Zhou et al., 2016), the sequential matching network (Wu et al., 2017, 2018b), and the deep attention matching network (Zhou et al., 2018b). Besides model design, some attention is also paid to the learning problem of matching models (Wu et al., 2018a). Our work belongs to the second group. The proposed interaction-over-interaction network is unique in that it performs matching by stacking multiple interaction blocks, and thus extends the shallow interaction in state-of-the-art methods to a deep 3 GRU ... Utterance-1 Response Utterance-n Initial Representation GRU ... GRU ... GRU ... Interaction Block 1 Interaction Block 2 Interaction Block L GRU GRU g(c,r) : Self-attention : Interaction Operation : Add Operation T11 v11 Tn1 vn1 T12 v12 Tn2 vn2 TnL vnL T1L v1L Figure 1: Architecture of interaction-over-interaction network. form. As far as we know, this is the first architecture that realizes deep interaction for multi-turn response selection. Encouraged by the big success of deep neural architectures such as Resnet (He et al., 2016) and inception (Szegedy et al., 2015) in computer vision, researchers have studied if they can achieve similar results with deep neural networks on NLP tasks. Although deep models have not yet brought breakthroughs to NLP as they do to computer vision, they have proven effective in a few tasks such as text classification (Conneau et al., 2017), natural language inference (Kim et al., 2018; Tay et al., 2018), and question answering (Tay et al., 2018; Kim et al., 2018), etc. In this work, we attempt to improve the accuracy of multi-turn response selection in retrieval-based dialogue systems by increasing the depth of context-response interaction in matching. Through extensive studies on benchmarks, we show that depth can bring significant improvement to model performance on the task. 3 Problem Formalization Suppose that there is a conversation data set D = {(yi, ci, ri)}N i=1. ∀i ∈{1, . . . 
, N}, ci = {ui,1, . . . , ui,li} represents a conversation context with ui,k the k-th turn, ri is a response candidate, and yi ∈{0, 1} denotes a label with yi = 1 indicating ri a proper response for ci, otherwise yi = 0. The task is to learn a matching model g(·, ·) from D, and thus for a new context-response pair (c, r), g(c, r) measures the matching degree between c and r. In the following sections, we will elaborate how to define g(·, ·) to achieve deep interaction between c and r, and how to learn such a deep model from D. 4 Interaction-over-Interaction Network We define g(·, ·) as an interaction-over-interaction network (IoI). Figure 1 illustrates the architecture of IoI. The model pairs each utterance in a context with a response candidate, and then aggregates matching information from all the pairs as a matching score of the context and the response candidate. For each pair, IoI starts from initial representations of the utterance and the response, and then feeds the pair to stacked interaction blocks. Each block represents the utterance and the response by letting them interact with each other based on the interactions before. Matching signals are first accumulated along the sequence of the utterances in each block, and then combined along the chain of blocks as the final matching score. Below we will describe details of components of IoI and how to learn the model with D. 4.1 Initial Representations Given an utterance u in a context c and a response candidate r, u and r are initialized as Eu = [eu,1, · · · , eu,m] and Er = [er,1, · · · , er,n] respectively. ∀i ∈{1, . . . , m} and ∀j ∈{1, . . . , n}, eu,i and er,j are representations of the i-th word of u and the j-th word of r respectively which 4 are obtained by pre-training Word2vec (Mikolov et al., 2013) on D. Eu and Er are then processed by stacked interaction blocks that model different levels of interaction between u and r and generate matching signals. 4.2 Interaction Block The stacked interaction blocks share the same internal structure. In a nutshell, each block is composed of a self-attention module that captures long-term dependencies within an utterance and a response, an interaction module that models the interaction between the utterance and the response, and a compression module that condenses the results of the first two modules into representations of the utterance and the response as output of the block. The output is then utilized as the input of the next block. Before diving to details of the block, we first generally describe an attention mechanism that lays a foundation for the self-attention module and the interaction module. Let Q ∈Rnq×d and K ∈Rnk×d be a query and a key respectively, where nq and nk denote numbers of words and d is the embedding size, then attention from Q to K is defined as ˆQ = S(Q, K) · K, (1) where S(·, ·) is a function for attention weight calculation. Here, we exploit the symmetric function in (Huang et al., 2017b) as S(·, ·) which is given by: S(Q, K) = softmax(f(QW)Df(KW)⊤). (2) In Equation (2), f is a ReLU activation function, D is a diagonal matrix, and both D ∈Rd×d and W ∈Rd×d are parameters to estimate from training data. Intuitively, in Equation (1), each entry of K is weighted by an importance score defined by the similarity of an entry of Q and an entry of K. The entries of K are then linearly combined with the weights to form a new representation of Q. A residual connection (He et al., 2016) and a layer normalization (Ba et al., 2016) are then applied to ˆQ as ˜Q. 
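As a concrete reading of Eqs. (1)-(2), the PyTorch sketch below implements the symmetric attention weights, the attended representation, and the residual connection with layer normalization. Batch dimensions and the feed-forward sub-layer introduced next (Eq. (3)) are omitted, and using nn.Linear for W and adding the residual from Q are illustrative choices rather than details given in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SymmetricAttention(nn.Module):
    """Attention with the symmetric weight function of Eq. (2)."""
    def __init__(self, d):
        super().__init__()
        self.W = nn.Linear(d, d, bias=False)   # shared projection W
        self.D = nn.Parameter(torch.ones(d))   # diagonal of D, stored as a vector
        self.norm = nn.LayerNorm(d)

    def forward(self, Q, K):                   # Q: (n_q, d), K: (n_k, d)
        fq, fk = F.relu(self.W(Q)), F.relu(self.W(K))
        S = F.softmax((fq * self.D) @ fk.t(), dim=-1)   # Eq. (2): softmax(f(QW) D f(KW)^T)
        Q_hat = S @ K                                   # Eq. (1)
        return self.norm(Q + Q_hat)                     # residual connection + layer norm
```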
After that, ˜Q is fed to a feed forward network which is formulated as ReLU(˜QW1 + b1)W2 + b2, (3) where W{1,2} ∈Rd×d and b{1,2} are parameters. The output of the attention mechanism is defined with the result of Equation (3) after another round of residual connection and layer normalization. For ease of presentation, we denote the entire attention mechanism as fATT (Q, K). Let Uk−1 and Rk−1 be the input of the k-th block where U0 = Eu and R0 = Er, then the self-attention module is defined as ˆUk = fATT(Uk−1, Uk−1), (4) ˆRk = fATT(Rk−1, Rk−1). (5) The interaction module first lets Uk−1 and Rk−1 attend to each other by U k = fATT(Uk−1, Rk−1), (6) R k = fATT(Rk−1, Uk−1). (7) Then Uk−1 and Rk−1 further interact with U k and R k respectively, which can be formulated as ˜Uk = Uk−1 ⊙U k, (8) ˜Rk = Rk−1 ⊙R k, (9) where ⊙denotes element-wise multiplication. Finally, the compression module updates Uk−1 and Rk−1 to Uk and Rk as the output of the block. Suppose that ek u,i and ek r,i are the i-th entries of Uk and Rk respectively, then ek u,i and ek r,i are calculated by ek u,i = ReLU(wp   ek−1 u,i ˆek u,i ek u,i ˜ek u,i  + bp) + ek−1 u,i , (10) ek r,i = ReLU(wp   ek−1 r,i ˆek r,i ek r,i ˜ek r,i  + bp) + ek−1 r,i , (11) where wp ∈R4d×d and bp are learnable projection weights and biases, ˆek {u,r},i, ek {u,r},i, ˜ek {u,r},i, and ek−1 {u,r},i are the i-th entries of { ˆU, ˆR}k, {U, R}k, { ˜U, ˜R}k, and {U, R}k−1, respectively. Inspired by Huang et al. (2017a), we also introduce direct connections from initial representations to all their corresponding subsequent blocks. 4.3 Matching Aggregation Suppose that c = (u1, . . . , ul) is a conversation context with ui the i-th utterance, then in the kth interaction block, we construct three similarity 5 matrices by Mk i,1 = Uk−1 i · (Rk−1)⊤ √ d , Mk i,2 = ˆUk i · ( ˆRk)⊤ √ d , Mk i,3 = U k i · (R k)⊤ √ d , (12) where Uk−1 i and Rk−1 are the input of the k-th block, ˆUk i and ˆRk are defined by Equations (4-5), and U k i and R k are calculated by Equations (6-7). The three matrices are then concatenated into a 3D matching tensor Tk i ∈Rmi×n×3 which can be written as Tk i = Mk i,1 ⊕Mk i,2 ⊕Mk i,3, (13) where ⊕denotes a concatenation operation, and mi and n refer to numbers of words in ui and r respectively. We exploit a convolutional neural network (Krizhevsky et al., 2012) to extract matching features from Tk i . The output of the final feature maps are flattened and mapped to a d-dimensional matching vector vk i with a linear transformation. (vk 1, · · · , vk l ) is then fed to a GRU (Chung et al., 2014) to capture temporal relationship among (u1, . . . , ul). ∀i ∈{1, . . . , l}, the i-th hidden state of the GRU model is given by hk i = GRU(vk i , hk i−1), (14) where hk 0 is randomly initialized. A matching score for context c and response candidate r in the k-th block is defined as gk(c, r) = σ(hk l · wo + bo), (15) where wo and bo are parameters, and σ(·) is a sigmoid function. Finally, g(c, r) is defined by g(c, r) = L X k=1 gk(c, r), (16) where L is the number of interaction blocks in IoI. Note that we define g(c, r) with all blocks rather than only with the last block. This is motivated by (1) only using the last block will make training of IoI difficult due to the gradient vanishing/exploding problem; and (2) different blocks may capture different levels of matching information in (c, r), and thus leveraging all of them could enhance matching accuracy. 
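Pulling Eqs. (4)-(11) together, a single interaction block can be sketched as follows, reusing the SymmetricAttention module from the previous sketch. Whether the self- and cross-attention share parameters is not stated, so sharing one module here is an assumption, and the direct connections from the initial representations are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InteractionBlock(nn.Module):
    """One interaction block: self-attention, cross-attention, element-wise
    interaction, and compression back to d-dimensional representations."""
    def __init__(self, d):
        super().__init__()
        self.att = SymmetricAttention(d)       # f_ATT from the previous sketch (shared here)
        self.compress = nn.Linear(4 * d, d)    # w_p and b_p of Eqs. (10)-(11)

    def forward(self, U, R):                   # U: (m, d), R: (n, d)
        U_hat, R_hat = self.att(U, U), self.att(R, R)      # Eqs. (4)-(5)
        U_bar, R_bar = self.att(U, R), self.att(R, U)      # Eqs. (6)-(7)
        U_tilde, R_tilde = U * U_bar, R * R_bar            # Eqs. (8)-(9)
        U_next = F.relu(self.compress(torch.cat([U, U_hat, U_bar, U_tilde], dim=-1))) + U  # Eq. (10)
        R_next = F.relu(self.compress(torch.cat([R, R_hat, R_bar, R_tilde], dim=-1))) + R  # Eq. (11)
        return U_next, R_next
```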
5 Learning Methods We consider two strategies to learn an IoI model from the training data D. The first strategy estimates the parameters of IoI (denoted as Θ) by minimizing a global loss function that is formulated as − N X i=1  yi log(g(ci, ri))+(1−yi) log(1−g(ci, ri))  . (17) In the second strategy, we construct a local loss function for each block and minimize the summation of the local loss functions. By this means, each block can be directly supervised by the labels in D during learning. The learning objective is then defined as − L X k=1 N X i=1  yi log(gk(ci, ri)) + (1 −yi) log(1 −gk(ci, ri))  . (18) We compare the two learning strategies through empirical studies, as will be reported in the next section. In both strategies, Θ are optimized using back-propagation with Adam algorithm (Kingma and Ba, 2015). 6 Experiments We test the proposed IoI on three benchmark data sets for multi-turn response selection. 6.1 Experimental Setup The first data we use is the Ubuntu Dialogue Corpus (Lowe et al., 2015) which is a multi-turn English conversation data set constructed from chat logs of the Ubuntu forum. We use the version provided by Xu et al. (2017). The data contains 1 million context-response pairs for training, and 0.5 million pairs for validation and test. In all the three sets, positive responses are human responses, while negative ones are randomly sampled. The ratio of the positive and the negative is 1:1 in the training set, and 1:9 in both the validation set and the test set. Following Lowe et al. (2015), we employ recall at position k in n candidates (Rn@k) as evaluation metrics. The second data set is the Douban Conversation Corpus (Wu et al., 2017) that consists of multiturn Chinese conversations collected from Douban group1. There are 1 million context-response pairs 1https://www.douban.com/group 6 for training, 50 thousand pairs for validation, and 6, 670 pairs for testing. In the training set and the validation set, the last turn of each conversation is taken as a positive response and a negative response is randomly sampled. For each context in the test set, 10 response candidates are retrieved from an index and their appropriateness regarding to the context is annotated by human labelers. Following Wu et al. (2017), we employ Rn@ks, mean average precision (MAP), mean reciprocal rank (MRR) and precision at position 1 (P@1) as evaluation metrics. Finally, we choose the E-commerce Dialogue Corpus (Zhang et al., 2018b) as an experimental data set. The data consists of multi-turn realworld conversations between customers and customer service staff in Taobao2, which is the largest e-commerce platform in China. It contains 1 million context-response pairs for training, and 10 thousand pairs for validation and test. Positive responses in this data are real human responses, and negative candidates are automatically constructed by ranking the response corpus based on conversation history augmented messages using Apache Lucene3. The ratio of the positive and the negative is 1:1 in training and validation, and 1:9 in test. Following (Zhang et al., 2018b), we employ R10@1, R10@2, and R10@5 as evaluation metrics. 
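For reference, Rn@k reduces to a top-k hit indicator when each context has a single ground-truth response, as in the Ubuntu and E-commerce test sets; with multiple positives, as in the Douban test set, the count of relevant candidates in the top k is divided by the total number of relevant ones. A minimal NumPy sketch for the single-positive case, with variable names of our choosing:

```python
import numpy as np

def recall_at_k(candidate_scores, true_index, k):
    """R_n@k for one context: 1 if the ground-truth response is ranked among the
    top k of the n scored candidates, 0 otherwise."""
    ranking = np.argsort(-np.asarray(candidate_scores))   # indices sorted by descending score
    return int(true_index in ranking[:k])

def mean_recall_at_k(all_scores, all_true_indices, k):
    """Corpus-level R_n@k, averaged over contexts."""
    return float(np.mean([recall_at_k(s, t, k)
                          for s, t in zip(all_scores, all_true_indices)]))
```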
6.2 Baselines We compare IoI with the following models: Single-turn Matching Models: these models, including RNN (Lowe et al., 2015), CNN (Lowe et al., 2015), LSTM (Lowe et al., 2015), BiLSTM (Kadlec et al., 2015), MV-LSTM (Wan et al., 2016) and Match-LSTM (Wang and Jiang, 2016), perform context-response matching by concatenating all utterances in a context into a single long document and calculating a matching score between the document and a response candidate. Multi-View (Zhou et al., 2016): the model calculates matching degree between a context and a response candidate from both a word sequence view and an utterance sequence view. DL2R (Yan et al., 2016): the model first reformulates the last utterance with previous turns in a context with different approaches. A response candidate and the reformulated message are then represented by a composition of RNN and CNN. 2https://www.taobao.com 3http://lucene.apache.org/ Finally, a matching score is computed with the concatenation of the representations. SMN (Wu et al., 2017): the model lets each utterance in a context interact with a response candidate at the beginning, and then transforms interaction matrices into a matching vector with CNN. The matching vectors are finally accumulated with an RNN as a matching score. DUA (Zhang et al., 2018b): the model considers the relationship among utterances within a context by exploiting deep utterance aggregation to form a fine-grained context representation. Each refined utterance then matches with a response candidate, and their matching degree is finally calculated through an aggregation on turns. DAM (Zhou et al., 2018b): the model lets each utterance in a context interact with a response candidate at different levels of representations obtained by a stacked self-attention module and a cross-attention module. For the Ubuntu data and the Douban data, since results of all baselines under fine-tuning are available in Zhou et al. (2018b), we directly copy the numbers from the paper. For the E-commerce data, Zhang et al. (2018b) report performance of all baselines except DAM. Thus, we copy all available numbers from the paper and implement DAM with the published code4. In order to conduct statistical tests, we also run the code of DAM on the Ubuntu data and the Douban data. 6.3 Implementation Details In IoI, we set the size of word embedding as 200. For the CNN in matching aggregation, we set the window size of convolution and pooling kernels as (3, 3), and the strides as (1, 1) and (3, 3) respectively. The number of convolution kernels is 32 in the first layer and 16 in the second layer. The dimension of the hidden states of GRU is set as 200. Following Wu et al. (2017), we limit the length of a context to 10 turns and the length of an utterance (either from a context or from a response candidate) to 50 words. Truncation or zero-padding is applied to a context or a response candidate when necessary. We gradually increase the number of interaction blocks (i.e., L) in IoI, and finally set L = 7 in comparison with the baseline models. In optimization, we choose 0.2 as a dropout rate, and 50 as the size of mini-batches. 
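A hedged PyTorch sketch of the per-block matching aggregation (Eqs. (12)-(15)) with the convolution, pooling, and GRU settings just listed is given below; the placement of pooling after each convolution and the lazily sized linear map to the matching vector are assumptions, since the original code layout is not reproduced here.

```python
import torch
import torch.nn as nn

class MatchingAggregation(nn.Module):
    """CNN over the 3-channel matching tensors followed by a turn-level GRU."""
    def __init__(self, d=200):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1)), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(3, 3), stride=(3, 3)),
            nn.Conv2d(32, 16, kernel_size=(3, 3), stride=(1, 1)), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(3, 3), stride=(3, 3)),
        )
        self.to_vector = nn.LazyLinear(d)      # linear map from flattened feature maps to v_i^k
        self.gru = nn.GRU(d, d, batch_first=True)
        self.score = nn.Linear(d, 1)           # w_o and b_o of Eq. (15)

    def forward(self, T):                      # T: (num_turns, 3, m_i, n), tensors of one context
        v = self.to_vector(self.cnn(T).flatten(start_dim=1)).unsqueeze(0)  # (1, num_turns, d)
        _, h_last = self.gru(v)                                            # Eq. (14)
        return torch.sigmoid(self.score(h_last[-1])).squeeze(-1)          # g_k(c, r)
```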
The learning rate is initialized as 0.0005, and exponentially decayed 4 https://github.com/baidu/Dialogue 7 Models Metrics Ubuntu Corpus Douban Corpus R2@1 R10@1 R10@2 R10@5 MAP MRR P@1 R10@1 R10@2 R10@5 RNN (Lowe et al., 2015) 0.768 0.403 0.547 0.819 0.390 0.422 0.208 0.118 0.223 0.589 CNN (Lowe et al., 2015) 0.848 0.549 0.684 0.896 0.417 0.440 0.226 0.121 0.252 0.647 LSTM (Lowe et al., 2015) 0.901 0.638 0.784 0.949 0.485 0.527 0.320 0.187 0.343 0.720 BiLSTM (Kadlec et al., 2015) 0.895 0.630 0.780 0.944 0.479 0.514 0.313 0.184 0.330 0.716 DL2R (Yan et al., 2016) 0.899 0.626 0.783 0.944 0.488 0.527 0.330 0.193 0.342 0.705 MV-LSTM (Wan et al., 2016) 0.906 0.653 0.804 0.946 0.498 0.538 0.348 0.202 0.351 0.710 Match-LSTM (Wang and Jiang, 2016) 0.904 0.653 0.799 0.944 0.500 0.537 0.345 0.202 0.348 0.720 Multi-View (Zhou et al., 2016) 0.908 0.662 0.801 0.951 0.505 0.543 0.342 0.202 0.350 0.729 SMN (Wu et al., 2017) 0.926 0.726 0.847 0.961 0.529 0.569 0.397 0.233 0.396 0.724 DUA(Zhang et al., 2018b) 0.752 0.868 0.962 0.551 0.599 0.421 0.243 0.421 0.780 DAM (Zhou et al., 2018b) 0.938 0.767 0.874 0.969 0.550 0.601 0.427 0.254 0.410 0.757 IoI-global 0.941 0.778 0.879 0.970 0.566 0.608 0.433 0.263 0.436 0.781 IoI-local 0.947 0.796 0.894 0.974 0.573 0.621 0.444 0.269 0.451 0.786 Table 1: Evaluation results on the Ubuntu data and the Douban data. Numbers in bold mean that the improvement to the best performing baseline is statistically significant (t-test with p-value < 0.05). Models Metrics R10@1 R10@2 R10@5 RNN (Lowe et al., 2015) 0.325 0.463 0.775 CNN (Lowe et al., 2015) 0.328 0.515 0.792 LSTM (Lowe et al., 2015) 0.365 0.536 0.828 BiLSTM (Kadlec et al., 2015) 0.355 0.525 0.825 DL2R (Yan et al., 2016) 0.399 0.571 0.842 MV-LSTM (Wan et al., 2016) 0.412 0.591 0.857 Match-LSTM (Wang and Jiang, 2016) 0.410 0.590 0.858 Multi-View (Zhou et al., 2016) 0.421 0.601 0.861 SMN (Wu et al., 2017) 0.453 0.654 0.886 DUA(Zhang et al., 2018b) 0.501 0.700 0.921 DAM (Zhou et al., 2018b) 0.526 0.727 0.933 IoI-global 0.554 0.747 0.942 IoI-local 0.563 0.768 0.950 Table 2: Evaluation results on the E-commerce data. Numbers in bold mean that the improvement to the best performing baseline is statistically significant (ttest with p-value < 0.05). during training. 6.4 Evaluation Results Table 1 and Table 2 report evaluation results on the three data sets where IoI-global and IoI-local represent models learned with Objective (17) and Objective (18) respectively. We can see that both IoIlocal and IoI-global outperform the best performing baseline, and improvements from IoI-local on all metrics and from IoI-global on a few metrics are statistically significant (t-test with p-value < 0.05). IoI-local is consistently better than IoIglobal over all metrics on all the three data sets, demonstrating that directly supervising each block in learning can lead to a more optimal deep structure than optimizing the final matching model. 6.5 Discussions In this section, we make some further analysis with IoI-local to understand (1) how depth of in0.78 0.79 0.80 R10@1 0.778 0.789 0.793 0.794 0.795 0.795 0.796 0.794 Ubuntu E-Commerce Douban 0.45 0.50 0.55 R10@1 0.467 0.516 0.528 0.537 0.554 0.563 0.563 0.561 1 2 3 4 5 6 7 8 # Interaction Blocks 0.40 0.42 0.44 P@1 0.402 0.421 0.430 0.432 0.441 0.440 0.444 0.441 Figure 2: Performance of IoI under different numbers of the interaction blocks. 
teraction affects the performance of IoI; (2) how context length affects the performance of IoI; and (3) importance of different components of IoI with respect to matching accuracy. Impact of interaction depth. Figure 2 illustrates how the performance of IoI changes with respect to the number of interaction blocks on test sets of the three data. From the chart, we observe a consistent trend over the three data sets: there is significant improvement during the first few blocks, and then the performance of the model becomes stable. The results indicate that depth of interaction indeed matters in terms of matching accuracy. With shallow interaction (L = 1), IoI performs worse than DAM on the Douban data and the E-commerce data. Only after the interaction goes deep (L ≥5), improvement from IoI 8 Models Metrics Ubuntu data Douban data E-commerce data R2@1 R10@1 R10@2 MAP MRR P@1 R10@1 R10@2 R10@5 IoI 0.947 0.796 0.894 0.573 0.621 0.444 0.563 0.768 0.947 IoI-E 0.947 0.794 0.891 0.568 0.616 0.438 0.559 0.762 0.943 IoI- ˆE 0.946 0.790 0.888 0.565 0.613 0.433 0.557 0.749 0.941 IoI-E 0.947 0.793 0.890 0.566 0.613 0.439 0.560 0.754 0.943 IoI- ˜E 0.947 0.795 0.891 0.571 0.616 0.441 0.562 0.740 0.944 IoI-M1 0.946 0.793 0.890 0.568 0.611 0.436 0.557 0.743 0.943 IoI-M2 0.944 0.788 0.886 0.562 0.605 0.427 0.551 0.739 0.942 IoI-M3 0.946 0.793 0.889 0.567 0.615 0.438 0.558 0.748 0.946 Table 3: Evaluation results of the ablation study on the three data sets. (0, 10] (10, 20] (20, 30] (30, 50] Average utterance length (words) 0.725 0.750 0.775 0.800 0.825 0.850 R10@1 DAM IoI-1L IoI-7L (a) R10@1 vs. Average utterance length [2, 4] [5, 7] [8, 10] Context length (turns) 0.75 0.76 0.77 0.78 0.79 0.80 0.81 R10@1 DAM IoI-1L IoI-7L (b) R10@1 vs. Number of turns Figure 3: Performance of IoI across contexts with different lengths on the Ubuntu data. to DAM on the two data becomes significant. On the Ubuntu data, improvement to DAM from the deep model (L = 7) is more than twice as much as that from the shallow model (L = 1). The performance of IoI becomes stable earlier on the Ubuntu data than it does on the other two data. This may stem from the different nature of test sets of the three data. The test set of the Ubuntu data is in large size and built by random sampling, while the test sets of the other two data are smaller and constructed through response retrieval. Impact of context length. Context length is measured by (1) number of turns in a context and (2) average length of utterances in a context. Figure 3 shows how the performance of IoI varies across contexts with different lengths, where we bin test examples of the Ubuntu data into buckets and compare IoI (L = 7) with its shallow version (L = 1) and DAM. We find that (1) IoI, either in a deep form or in a shallow form, is good at dealing with contexts with long utterances, as the model achieves better performance on longer utterances; (2) overall, IoI performs well on contexts with more turns, although too many turns (e.g., ≥8) is still challenging; (3) a deep form of our model is always better than its shallow form, no matter how we measure context length, and the gap between the two forms is bigger on short contexts than it is on long contexts, indicating that depth mainly improves matching accuracy on short contexts; and (4) trends of DAM in both charts are consistent with those reported in (Zhou et al., 2018b), and on both short contexts and long contexts, IoI is superior to DAM. Ablation study. 
Finally, we examine how different components of IoI affects its performance. First, we remove ek−1 u,i (ek−1 r,i ), ˆek u,i (ˆek r,i), ek u,i (ek r,i), and ˜ek u,i (˜ek r,i) one by one from Equation (10) and Equation (11), and denote the models as IoI-E, IoI- ˆE, IoI-E, and IoI- ˜E respectively. Then, we keep all representations in Equation (10) and Equation (11), and remove Mk i,1, Mk i,2, and Mk i,3 one by one from Equation (13). The models are named IoI-M1, IoI-M2, and IoI-M3 respectively. Table 3 reports the ablation results5. We conclude that (1) all representations are useful in representing the information flow along the chain of interaction blocks and capturing the matching information between an utterance-response pair within the blocks, as removing any component gener5Due to space limitation, we only report results on main metrics. 9 ally causes performance drop on all the three data sets; and (2) in terms of component importance, ˆE > E > E > ˜E and M2 > M1 ≈M3, meaning that self-attention (i.e., ˆE) and cross-attention (i.e., E) are more important than others in information flow representation, and self-attention (i.e., those used for calculating M2) convey more matching signals. Note that these results are obtained with IoI (L = 7). We also check the ablation results of IoI (L = 1) and do not see much difference on overall trends and relative gaps among different ablated models. 7 Conclusions and Future Work We present an interaction-over-interaction network (IoI) that lets utterance-response interaction in context-response matching go deep. Depth of the model comes from stacking multiple interaction blocks that execute representationinteraction-representation in an iterative manner. Evaluation results on three benchmarks indicate that IoI can significantly outperform baseline methods with moderate depth. In the future, we plan to integrate our IoI model with models like ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) to study if the performance of IoI can be further improved. Acknowledgement We would like to thank the anonymous reviewers for their constructive comments. This work was supported by the National Key Research and Development Program of China (No. 2017YFC0804001), the National Science Foundation of China (NSFC Nos. 61672058 and 61876196). References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Alexis Conneau, Holger Schwenk, Lo¨ıc Barrault, and Yann Lecun. 2017. Very deep convolutional networks for text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1107–1116. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems, pages 2042–2050. 
Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. 2017a. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708. Hsin-Yuan Huang, Chenguang Zhu, Yelong Shen, and Weizhu Chen. 2017b. FusionNet: Fusing via fullyaware attention with application to machine comprehension. In International Conference on Learning Representations. Rudolf Kadlec, Martin Schmid, and Jan Kleindienst. 2015. Improved deep learning baselines for ubuntu corpus dialogs. arXiv preprint arXiv:1510.03753. Seonhoon Kim, Jin-Hyuk Hong, Inho Kang, and Nojun Kwak. 2018. Semantic sentence matching with densely-connected recurrent and co-attentive information. arXiv preprint arXiv:1805.11360. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105. Feng-Lin Li, Minghui Qiu, Haiqing Chen, Xiongwei Wang, Xing Gao, Jun Huang, Juwei Ren, Zhongzhou Zhao, Weipeng Zhao, Lei Wang, et al. 2017a. AliMe assist: An intelligent assistant for creating an innovative e-commerce experience. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 2495– 2498. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119. 10 Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016a. A persona-based neural conversation model. In Association for Computational Linguistics, pages 994– 1003. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016b. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1192– 1202. Jiwei Li, Will Monroe, Tianlin Shi, S˙ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017b. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157–2169. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285–294. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3349–3358. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. End-to-end dialogue systems using generative hierarchical neural network models. In AAAI, pages 3776–3784. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1577–1586. Heung-Yeung Shum, Xiaodong He, and Di Li. 2018. From Eliza to XiaoIce: Challenges and opportunities with social chatbots. Frontiers of IT & EE, 19(1):10–26. Yiping Song, Rui Yan, Cheng-Te Li, Jian-Yun Nie, Ming Zhang, and Dongyan Zhao. 2018. An ensemble of retrieval-based and generation-based humancomputer conversation systems. In IJCAI, pages 4382–4388. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9. Chongyang Tao, Shen Gao, Mingyue Shang, Wei Wu, Dongyan Zhao, and Rui Yan. 2018. Get the point of my utterance! learning towards effective responses with multi-head attention mechanism. In IJCAI, pages 4418–4424. Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2018. Co-stack residual affinity networks with multi-level attention refinement for matching text sequences. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4492–4502. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869. Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, and Xueqi Cheng. 2016. Match-srnn: Modeling the recursive matching structure with spatial rnn. In IJCAI, pages 2922–2928. Hao Wang, Zhengdong Lu, Hang Li, and Enhong Chen. 2013. A dataset for research on short-text conversations. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 935–945. Mingxuan Wang, Zhengdong Lu, Hang Li, and Qun Liu. 2015. Syntax-based deep matching of short texts. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, pages 1354–1361. Shuohang Wang and Jing Jiang. 2016. Learning natural language inference with LSTM. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1442–1451. Yu Wu, Wei Wu, Zhoujun Li, and Ming Zhou. 2018a. Learning matching models with weak supervision for response selection in retrieval-based chatbots. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 420–425. Yu Wu, Wei Wu, Chen Xing, Can Xu, Zhoujun Li, and Ming Zhou. 2018b. A sequential matching framework for multi-turn response selection in retrieval-based chatbots. Computational Linguistics, 45(1):163–197. 11 Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 496–505. 
Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In AAAI, pages 3351– 3357. Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, and Xiaolong Wang. 2017. Incorporating loosestructured knowledge into LSTM with recall gate for conversation modeling. In Proceedings of the 2017 International Joint Conference on Neural Networks, pages 3506–3513. Rui Yan, Yiping Song, and Hua Wu. 2016. Learning to respond with deep neural networks for retrievalbased human-computer conversation system. In SIGIR, pages 55–64. Rui Yan and Dongyan Zhao. 2018. Coupled context modeling for deep chit-chat: towards conversations between human and computer. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2574– 2583. ACM. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018a. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204– 2213. Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, and Gongshen Liu. 2018b. Modeling multiturn conversation with deep utterance aggregation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3740–3752. Association for Computational Linguistics. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 654–664. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018a. Emotional chatting machine: Emotional conversation generation with internal and external memory. In The Thirty-Second AAAI Conference on Artificial Intelligence, pages 730–738. Xiangyang Zhou, Daxiang Dong, Hua Wu, Shiqi Zhao, Dianhai Yu, Hao Tian, Xuan Liu, and Rui Yan. 2016. Multi-view response selection for human-computer conversation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 372–381. Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018b. Multi-turn response selection for chatbots with deep attention matching network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1118–1127.
2019
1
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 95–106 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 95 Generating Logical Forms from Graph Representations of Text and Entities Peter Shaw1, Philip Massey1, Angelica Chen1, Francesco Piccinno2, Yasemin Altun2 1Google 2Google Research {petershaw,pmassey,angelicachen,piccinno,altun}@google.com Abstract Structured information about entities is critical for many semantic parsing tasks. We present an approach that uses a Graph Neural Network (GNN) architecture to incorporate information about relevant entities and their relations during parsing. Combined with a decoder copy mechanism, this approach provides a conceptually simple mechanism to generate logical forms with entities. We demonstrate that this approach is competitive with the stateof-the-art across several tasks without pretraining, and outperforms existing approaches when combined with BERT pre-training. 1 Introduction Semantic parsing maps natural language utterances into structured meaning representations. The representation languages vary between tasks, but typically provide a precise, machine interpretable logical form suitable for applications such as question answering (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2007; Liang et al., 2013; Berant et al., 2013). The logical forms typically consist of two types of symbols: a vocabulary of operators and domain-specific predicates or functions, and entities grounded to some knowledge base or domain. Recent approaches to semantic parsing have cast it as a sequence-to-sequence task (Dong and Lapata, 2016; Jia and Liang, 2016; Ling et al., 2016), employing methods similar to those developed for neural machine translation (Bahdanau et al., 2014), with strong results. However, special consideration is typically given to handling of entities. This is important to improve generalization and computational efficiency, as most tasks require handling entities unseen during training, and the set of unique entities can be large. Some recent approaches have replaced surface forms of entities in the utterance with placeholders (Dong and Lapata, 2016). This requires a preprocessing step to completely disambiguate entities and replace their spans in the utterance. Additionally, for some tasks it may be beneficial to leverage relations between entities, multiple entity candidates per span, or entity candidates without a corresponding span in the utterance, while generating logical forms. Other approaches identify only types and surface forms of entities while constructing the logical form (Jia and Liang, 2016), using a separate post-processing step to generate the final logical form with grounded entities. This ignores potentially useful knowledge about relevant entities. Meanwhile, there has been considerable recent interest in Graph Neural Networks (GNNs) (Scarselli et al., 2009; Li et al., 2016; Kipf and Welling, 2017; Gilmer et al., 2017; Veliˇckovi´c et al., 2018) for effectively learning representations for graph structures. We propose a GNN architecture based on extending the self-attention mechanism of the Transformer (Vaswani et al., 2017) to make use of relations between input elements. We present an application of this GNN architecture to semantic parsing, conditioning on a graph representation of the given natural language utterance and potentially relevant entities. 
This approach is capable of handling ambiguous and potentially conflicting entity candidates jointly with a natural language utterance, relaxing the need for completely disambiguating a set of linked entities before parsing. This graph formulation also enables us to incorporate knowledge about the relations between entities where available. Combined with a copy mechanism while decoding, this approach also provides a conceptually simple method for generating logical forms with grounded entities. We demonstrate the capability of the pro96 Dataset Example GEO x : which states does the mississippi run through ? y : answer ( state ( traverse 1( riverid ( mississippi ) ) ) ) ATIS x : in denver what kind of ground transportation is there from the airport to downtown y : ( _lambda $0 e ( _and ( _ground_transport $0 ) ( _to_city $0 denver : ci ) ( _from_airport $0 den : ap ) ) ) SPIDER x : how many games has each stadium held ? y : SELECT T1 . id , count ( ∗) FROM stadium AS T1 JOIN game AS T2 ON T1 . id = T2 . stadium id GROUP BY T1 . id Table 1: Example input utterances, x, and meaning representations, y, with entities underlined. posed architecture by achieving competitive results across 3 semantic parsing tasks. Further improvements are possible by incorporating a pretrained BERT (Devlin et al., 2018) encoder within the architecture. 2 Task Formulation Our goal is to learn a model for semantic parsing from pairs of natural language utterances and structured meaning representations. Let the natural language utterance be represented as a sequence x = (x1, . . . , x|x|) of |x| tokens, and the meaning representation be represented as a sequence y = (y1, . . . , y|y|) of |y| elements. The goal is to estimate p(y | x), the conditional probability of the meaning representation y given utterance x, which is augmented by a set of potentially relevant entities. Input Utterance Each token xi ∈Vin is from a vocabulary of input tokens. Entity Candidates Given the input utterance x, we retrieve a set, e = {e1, . . . , e|e|}, of potentially relevant entity candidates, with e ⊆Ve, where Ve is in the set of all entities for a given domain. We assume the availability of an entity candidate generator for each task to generate e given x, with details given in § 5.2. For each entity candidate, e ∈Ve, we require a set of task-specific attributes containing one or more elements from Va. These attributes can be NER types or other characteristics of the entity, such as “city” or “river” for some of the entities listed in Table 1. Whereas Ve can be quite large for open domains, or even infinite if it includes sets such as the natural numbers, Va is typically much smaller. Therefore, we can effectively learn representations for entities given their set of attributes, from our set of example pairs. Edge Labels In addition to x and e for a particular example, we also consider the (|x|+|e|)2 pairwise relations between all tokens and entity candidates, represented as edge labels. The edge label between tokens xi and xj corresponds to the relative sequential position, j −i, of the tokens, clipped to within some range. The edge label between token xi and entity ej, and vice versa, corresponds to whether xi is within the span of the entity candidate ej, or not. The edge label between entities ei and ej captures the relationship between the entities. These edge labels can have domain-specific interpretations, such as relations in a knowledge base, or any other type of entity interaction features. 
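To make the graph construction concrete, the sketch below (our illustration, not code from the paper) assembles the (|x|+|e|)^2 edge-label matrix for one example; the label names, the Entity record, and the default clipping distance of 8 are assumptions made only for the sketch.

from collections import namedtuple

# Hypothetical entity candidate: a token span [start, end) plus a map from
# other candidate indices to a task-specific relation, where one is known.
Entity = namedtuple("Entity", ["start", "end", "relations"])

def build_edge_labels(num_tokens, entities, clip=8):
    """Build an (n x n) matrix of edge labels over the combined node sequence
    [token_1, ..., token_|x|, entity_1, ..., entity_|e|]."""
    n = num_tokens + len(entities)
    labels = [[None] * n for _ in range(n)]
    # Token-token edges: relative sequential position j - i, clipped to [-clip, clip].
    for i in range(num_tokens):
        for j in range(num_tokens):
            labels[i][j] = ("tok-tok", max(-clip, min(clip, j - i)))
    # Token-entity edges (both directions): whether the token lies in the candidate's span.
    for k, ent in enumerate(entities):
        e = num_tokens + k
        for i in range(num_tokens):
            in_span = ent.start <= i < ent.end
            labels[i][e] = ("tok-ent", in_span)
            labels[e][i] = ("ent-tok", in_span)
    # Entity-entity edges: a domain-specific relation if available, else a generic label.
    for a in range(len(entities)):
        for b in range(len(entities)):
            rel = entities[a].relations.get(b, "generic")
            labels[num_tokens + a][num_tokens + b] = ("ent-ent", rel)
    return labels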
For tasks where this information is not available or useful, a single generic label between entity candidates can be used.

Output We consider the logical form, y, to be a linear sequence (Vinyals et al., 2015b). We tokenize based on the syntax of each domain. Our formulation allows each element of y to be either an element of the output vocabulary, V_out, or an entity copied from the set of entity candidates e. Therefore, y_i ∈ V_out ∪ V_e. Some experiments in §5.2 also allow elements of y to be tokens ∈ V_in from x that are copied from the input.

3 Model Architecture

Our model architecture is based on the Transformer (Vaswani et al., 2017), with the self-attention sub-layer extended to incorporate relations between input elements, and the decoder extended with a copy mechanism.

[Figure 1: We use an example from SPIDER to illustrate the model inputs: tokens from the given utterance, x, a set of potentially relevant entities, e, and their relations. We selected two edge label types to highlight: edges denoting that an entity spans a token, and edges between entities that, for SPIDER, indicate a foreign key relationship between columns, or an ownership relationship between columns and tables.]

3.1 GNN Sub-layer

We extend the Transformer's self-attention mechanism to form a Graph Neural Network (GNN) sub-layer that incorporates a fully connected, directed graph with edge labels. The sub-layer maps an ordered sequence of node representations, u = (u_1, ..., u_{|u|}), to a new sequence of node representations, u' = (u'_1, ..., u'_{|u|}), where each node is represented in R^d. We use r_{ij} to denote the edge label corresponding to u_i and u_j. We implement this sub-layer in terms of a function f(m, l) over a node representation m ∈ R^d and an edge label l that computes a vector representation in R^{d'}. We use n_heads parallel attention heads, with d' = d / n_heads. For each head k, the new representation for the node u_i is computed by

u_i^{k'} = \sum_{j=1}^{|u|} \alpha_{ij} f(u_j, r_{ij}),   (1)

where each coefficient \alpha_{ij} is a softmax over the scaled dot products s_{ij},

s_{ij} = \frac{(W^q u_i)^\top f(u_j, r_{ij})}{\sqrt{d'}},   (2)

and W^q is a learned matrix. Finally, we concatenate representations from each head,

u'_i = W^h [ u_i^{1'} | \cdots | u_i^{n_heads'} ],   (3)

where W^h is another learned matrix and [ ... ] denotes concatenation. If we implement f as

f(m, l) = W^r m,   (4)

where W^r ∈ R^{d' × d} is a learned matrix, then the sub-layer would be effectively identical to self-attention as initially proposed in the Transformer (Vaswani et al., 2017). We focus on two alternative formulations of f that represent edge labels as learned matrices and learned vectors.

Edge Matrices The first formulation represents edge labels as linear transformations, a common parameterization for GNNs (Li et al., 2016),

f(m, l) = W^l m,   (5)

where W^l ∈ R^{d' × d} is a learned embedding matrix per edge label.

Edge Vectors The second formulation represents edge labels as additive vectors, using the same formulation as Shaw et al. (2018),

f(m, l) = W^r m + w^l,   (6)

where W^r ∈ R^{d' × d} is a learned matrix shared by all edge labels, and w^l ∈ R^d is a learned embedding vector per edge label l.

3.2 Encoder

Input Representations Before the initial encoder layer, tokens are mapped to initial representations using either a learned embedding table for V_in, or the output of a pre-trained BERT (Devlin et al., 2018) encoder.
Entity candidates are mapped to initial representations using the mean of the embeddings for each of the entity's attributes, based on a learned embedding table for V_a. We also concatenate an embedding representing the node type, token or entity, to each input representation. We assume some arbitrary ordering for entity candidates, generating a combined sequence of initial node representations for tokens and entities. We have edge labels between every pair of nodes as described in § 2.

[Figure 2: Our model architecture is based on the Transformer (Vaswani et al., 2017), with two modifications. First, the self-attention sub-layer has been extended to be a GNN that incorporates edge representations. In the encoder, the GNN sub-layer is conditioned on tokens, entities, and their relations. Second, the decoder has been extended to include a copy mechanism (Vinyals et al., 2015a). We can optionally incorporate a pre-trained model such as BERT to generate contextual token representations.]

Encoder Layers Our encoder layers are essentially identical to the Transformer, except with the proposed extension to self-attention to incorporate edge labels. Therefore, each encoder layer consists of two sub-layers. The first is the GNN sub-layer, which yields new sets of token and entity representations. The second sub-layer is an element-wise feed-forward network. Each sub-layer is followed by a residual connection and layer normalization (Ba et al., 2016). We stack N_enc encoder layers, yielding a final set of token representations, w_x^{(N_enc)}, and entity representations, w_e^{(N_enc)}.

3.3 Decoder

The decoder auto-regressively generates output symbols, y_1, ..., y_{|y|}. It is similarly based on the Transformer (Vaswani et al., 2017), with the self-attention sub-layer replaced by the GNN sub-layer. Decoder edge labels are based only on the relative timesteps of the previous outputs. The encoder-decoder attention layer considers both encoder outputs w_x^{(N_enc)} and w_e^{(N_enc)}, jointly normalizing attention weights over tokens and entity candidates. We stack N_dec decoder layers to produce an output vector representation at each output step, z_j ∈ R^{d_z}, for j ∈ {1, ..., |y|}.

We allow the decoder to copy tokens or entity candidates from the input, effectively combining a Pointer Network (Vinyals et al., 2015a) with a standard softmax output layer for selecting symbols from an output vocabulary (Gu et al., 2016; Gulcehre et al., 2016; Jia and Liang, 2016). We define a latent action at each output step, a_j for j ∈ {1, ..., |y|}, using similar notation as Jia et al. (2016). We normalize action probabilities with a softmax over all possible actions.

Generating Symbols We can generate a symbol, denoted Generate[i],

P(a_j = Generate[i] | x, y_{1:j-1}) ∝ exp(z_j^\top w^{out}_i),   (7)

where w^{out}_i is a learned embedding vector for the element ∈ V_out with index i. If a_j = Generate[i], then y_j will be the element ∈ V_out with index i.

Copying Entities We can also copy an entity candidate, denoted CopyEntity[i],

P(a_j = CopyEntity[i] | x, y_{1:j-1}) ∝ exp((z_j W^e)^\top w_{e_i}^{(N_enc)}),   (8)

where W^e is a learned matrix, and i ∈ {1, ..., |e|}. If a_j = CopyEntity[i], then y_j = e_i.
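As a concrete reference for the GNN sub-layer of Section 3.1, here is a minimal NumPy sketch of a single attention head under the edge-vector formulation f(m, l) = W^r m + w^l (Equations (1), (2) and (6)). This is our own illustration rather than the authors' implementation: the parameter names, random initialization, and toy sizes are assumptions, and the multi-head concatenation, residual connections, and decoder-side generate/copy scoring are omitted.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def edge_vector_attention_head(u, edge_ids, W_q, W_r, w_label):
    """One head of the GNN sub-layer with additive edge vectors.

    u:        (n, d)   node representations (tokens followed by entities)
    edge_ids: (n, n)   integer edge-label id for every ordered node pair
    W_q:      (d, d')  query projection
    W_r:      (d, d')  projection shared by all edge labels
    w_label:  (L, d')  one learned vector per edge label
    returns:  (n, d')  new per-head node representations
    """
    d_head = W_q.shape[1]
    q = u @ W_q                                         # (n, d') queries
    # f(u_j, r_ij) = W_r u_j + w_{r_ij}: depends on both ends of the edge.
    f = (u @ W_r)[None, :, :] + w_label[edge_ids]       # (n, n, d')
    scores = np.einsum("id,ijd->ij", q, f) / np.sqrt(d_head)  # Eq. (2)
    alpha = softmax(scores, axis=-1)
    return np.einsum("ij,ijd->id", alpha, f)            # Eq. (1)

# Tiny usage example with random parameters.
rng = np.random.default_rng(0)
n, d, d_head, num_labels = 5, 8, 4, 3
u = rng.normal(size=(n, d))
edge_ids = rng.integers(0, num_labels, size=(n, n))
out = edge_vector_attention_head(
    u, edge_ids,
    W_q=rng.normal(size=(d, d_head)),
    W_r=rng.normal(size=(d, d_head)),
    w_label=rng.normal(size=(num_labels, d_head)))
print(out.shape)  # (5, 4)

Because f(u_j, r_{ij}) appears both in the attention score and in the weighted sum, an edge label influences how strongly node j is attended to as well as what information it contributes.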
99 4 Related Work Various approaches to learning semantic parsers from pairs of utterances and logical forms have been developed over the years (Tang and Mooney, 2000; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2011; Andreas et al., 2013). More recently, encoder-decoder architectures have been applied with strong results (Dong and Lapata, 2016; Jia and Liang, 2016). Even for tasks with relatively small domains of entities, such as GEO and ATIS, it has been shown that some special consideration of entities within an encoder-decoder architecture is important to improve generalization. This has included extending decoders with copy mechanisms (Jia and Liang, 2016) and/or identifying entities in the input as a pre-processing step (Dong and Lapata, 2016). Other work has considered open domain tasks, such as WEBQUESTIONSSP (Yih et al., 2016). Recent approaches have typically relied on a separate entity linking model, such as S-MART (Yang and Chang, 2015), to provide a single disambiguated set of entities to consider. In principle, a learned entity linker could also serve as an entity candidate generator within our framework, although we do not explore such tasks in this work. Considerable recent work has focused on constrained decoding of various forms within an encoder-decoder architecture to leverage the known structure of the logical forms. This has led to approaches that leverage this structure during decoding, such as using tree decoders (Dong and Lapata, 2016; Alvarez-Melis and Jaakkola, 2017) or other mechanisms (Dong and Lapata, 2018; Goldman et al., 2017). Other approaches use grammar rules to constrain decoding (Xiao et al., 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Yu et al., 2018b). We leave investigation of such decoder constraints to future work. Many formulations of Graph Neural Networks (GNNs) that propagate information over local neighborhoods have recently been proposed (Li et al., 2016; Kipf and Welling, 2017; Gilmer et al., 2017; Veliˇckovi´c et al., 2018). Recent work has often focused on large graphs (Hamilton et al., 2017) and effectively propagating information over multiple graph steps (Xu et al., 2018). The graphs we consider are relatively small and are fullyconnected, avoiding some of the challenges posed by learning representations for large, sparsely connected graphs. Other recent work related to ours has considered GNNs for natural language tasks, such as combining structured and unstructured data for question answering (Sun et al., 2018), or for representing dependencies in tasks such as AMR parsing and machine translation (Beck et al., 2018; Bastings et al., 2017). The approach of Krishnamurthy et al. (2017) similarly considers ambiguous entity mentions jointly with query tokens for semantic parsing, although does not directly consider a GNN. Previous work has interpreted the Transformer’s self-attention mechanism as a GNN (Veliˇckovi´c et al., 2018; Battaglia et al., 2018), and extended it to consider relative positions as edge representations (Shaw et al., 2018). Previous work has also similarly represented edge labels as vectors, as opposed to matrices, in order to avoid over-parameterizing the model (Marcheggiani and Titov, 2017). 5 Experiments 5.1 Semantic Parsing Datasets We consider three semantic parsing datasets, with examples given in Table 1. GEO The GeoQuery dataset consists of natural language questions about US geography along with corresponding logical forms (Zelle and Mooney, 1996). 
We follow the convention of Zettlemoyer and Collins (2005) and use 600 training examples and 280 test examples. We use logical forms based on Functional Query Language (FunQL) (Kate et al., 2005). ATIS The Air Travel Information System (ATIS) dataset consists of natural language queries about travel planning (Dahl et al., 1994). We follow Zettlemoyer and Collins (2007) and use 4473 training examples, 448 test examples, and represent the logical forms as lambda expressions. SPIDER This is a large-scale text-to-SQL dataset that consists of 10,181 questions and 5,693 unique complex SQL queries across 200 database tables spanning 138 domains (Yu et al., 2018c). We use the standard training set of 8,659 training example and development set of 1,034 examples, split across different tables. 100 5.2 Experimental Setup Model Configuration We configured hyperparameters based on performance on the validation set for each task, if provided, otherwise crossvalidated on the training set. For the encoder and decoder, we selected the number of layers from {1, 2, 3, 4} and embedding and hidden dimensions from {64, 128, 256}, setting the feed forward layer hidden dimensions 4× higher. We employed dropout at training time with Pdropout selected from {0.1, 0.2, 0.3, 0.4, 0.5, 0.6}. We used 8 attention heads for each task. We used a clipping distance of 8 for relative position representations (Shaw et al., 2018). We used the Adam optimizer (Kingma and Ba, 2015) with β1 = 0.9, β2 = 0.98, and ϵ = 10−9, and tuned the learning rate for each task. We used the same warmup and decay strategy for learning rate as Vaswani et al. (2017), selecting a number of warmup steps up to a maximum of 3000. Early stopping was used to determine the total training steps for each task. We used the final checkpoint for evaluation. We batched training examples together, and selected batch size from {32, 64, 128, 256, 512}. During training we used masked self-attention (Vaswani et al., 2017) to enable parallel decoding of output sequences. For evaluation, we used greedy search. We used a simple strategy of splitting each input utterance on spaces to generate a sequence of tokens. We mapped any token that didn’t occur at least 2 times in the training dataset to a special outof-vocabulary token. For experiments that used BERT, we instead used the same wordpiece (Wu et al., 2016) tokenization as used for pre-training. BERT For some of our experiments, we evaluated incorporating a pre-trained BERT (Devlin et al., 2018) encoder by effectively using the output of the BERT encoder in place of a learned token embedding table. We then continue to use graph encoder and decoder layers with randomly initialized parameters in addition to BERT, so there are many parameters that are not pre-trained. The additional encoder layers are still necessary to condition on entities and relations. We achieved best results by freezing the pretrained parameters for an initial number of steps, and then jointly fine-tuning all parameters, similar to existing approaches for gradual unfreezing (Howard and Ruder, 2018). When unfreezing the pre-trained parameters, we restart the learning rate schedule. We found this to perform better than keeping pre-trained parameters either entirely frozen or entirely unfrozen during fine-tuning. We used BERTLARGE (Devlin et al., 2018), which has 24 layers. For fine tuning we used the same Adam optimizer with weight decay and learning rate decay as used for BERT pre-training. 
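As a rough illustration of the learning-rate handling described above, the sketch below combines the Transformer warmup-and-decay schedule of Vaswani et al. (2017) with the freeze-then-unfreeze strategy for pre-trained parameters. The d_model value, warmup length, and unfreeze step are placeholders, and the actual BERT fine-tuning in the paper reuses BERT's own Adam variant with weight decay rather than exactly this schedule.

def transformer_lr(step, d_model=256, warmup_steps=3000):
    """Warmup/decay schedule of Vaswani et al. (2017):
    lr = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

def pretrained_lr(step, unfreeze_step=10000, **schedule_kwargs):
    """Learning rate applied to the pre-trained (BERT) parameters: zero while
    frozen, then the schedule is restarted from the unfreezing point."""
    if step < unfreeze_step:
        return 0.0
    return transformer_lr(step - unfreeze_step, **schedule_kwargs)

# Randomly initialized parameters would simply follow transformer_lr(step).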
We reduced batch sizes to accommodate the significantly larger model size, and tuned learning rate, warm up steps, and number of frozen steps for pre-trained parameters. Entity Candidate Generator We use an entity candidate generator that, given x, can retrieve a set of potentially relevant entities, e, for the given domain. Although all generators share a common interface, their implementation varies across tasks. For GEO and ATIS we use a lexicon of entity aliases in the dataset and attempt to match with ngrams in the query. Each entity has a single attribute corresponding to the entity’s type. We used binary valued relations between entity candidates based on whether entity candidate spans overlap, but experiments did not show significant improvements from incorporating these relations. For SPIDER, we generalize our notion of entities to include tables and table columns. We include all relevant tables and columns as entity candidates, but make use of Levenshtein distance between query ngrams and table and column names to determine edges between tokens and entity candidates. We use attributes based on the types and names of tables and columns. Edges between entity candidates capture relations between columns and the table they belong to, and foreign key relations. For GEO, ATIS, and SPIDER, this leads to 19.5%, 32.7%, and 74.6% of examples containing at least one span associated with multiple entity candidates, respectively, indicating some entity ambiguity. Further details on how entity candidate generators were constructed are provided in § A.1. Output Sequences We pre-processed output sequences to identify entity argument values, and replaced those elements with references to entity candidates in the input. In cases where our entity candidate generator did not retrieve an entity that was used as an argument, we dropped the example from the training data set or considered it incorrect 101 Method GEO ATIS Kwiatkowski et al. (2013) 89.0 — Liang et al. (2013) 87.9 — Wang et al. (2014) — 91.3 Zhao and Huang (2015) 88.9 84.2 Jia and Liang (2016) 89.3 83.3 −data augmentation 85.0 76.3 Dong and Lapata (2016) † 87.1 84.6 Rabinovich et al. (2017) † 87.1 85.9 Dong and Lapata (2018) † 88.2 87.7 Ours GNN w/ edge matrices 82.5 84.6 GNN w/ edge vectors 89.3 87.1 GNN w/ edge vectors + BERT 92.5 89.7 Method SPIDER Xu et al. (2017) 10.9 Yu et al. (2018a) 8.0 Yu et al. (2018b) 24.8 −data augmentation 18.9 Ours GNN w/ edge matrices 29.3 GNN w/ edge vectors 32.1 GNN w/ edge vectors + BERT 23.5 Table 2: We report accuracies on GEO, ATIS, and SPIDER for various implementations of our GNN sub-layer. For GEO and ATIS, we use † to denote neural approaches that disambiguate and replace entities in the utterance as a pre-processing step. For SPIDER, the evaluation set consists of examples for databases unseen during training. if in the test set. Evaluation To evaluate accuracy, we use exact match accuracy relative to gold logical forms. For GEO we directly compare output symbols. For ATIS, we compare normalized logical forms using canonical variable naming and sorting for unordered arguments (Jia and Liang, 2016). For SPIDER we use the provided evaluation script, which decomposes each SQL query and conducts set comparison within each clause without values. All accuracies are reported on the test set, except for SPIDER where we report and compare accuracies on the development set. 
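The lexicon-based candidate generation used for GEO and ATIS (described earlier in this subsection) can be pictured with a small sketch like the following. This is our illustration rather than the released code; the lexicon format, the maximum n-gram length, and the returned fields are assumptions.

def generate_entity_candidates(tokens, lexicon, max_ngram=4):
    """Match query n-grams against a lexicon of entity aliases.

    tokens:  list of lowercased query tokens
    lexicon: dict mapping alias strings (e.g. "newark international")
             to a list of (entity_id, attribute) pairs
    returns: list of candidates with their span and attribute
    """
    candidates = []
    for n in range(max_ngram, 0, -1):            # prefer longer matches first
        for start in range(len(tokens) - n + 1):
            alias = " ".join(tokens[start:start + n])
            for entity_id, attribute in lexicon.get(alias, []):
                candidates.append({
                    "entity": entity_id,
                    "attribute": attribute,      # e.g. "city", "airport", "time"
                    "span": (start, start + n),  # token span the alias covers
                })
    return candidates

# Overlapping spans can yield multiple candidates (e.g. "atlanta airport"
# producing both the city and the airport), which the model is left to resolve.
lexicon = {"atlanta": [("atlanta:ci", "city")],
           "atlanta airport": [("atl:ap", "airport")]}
print(generate_entity_candidates("flights from atlanta airport".split(), lexicon))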
Copying Tokens To better understand the effect of conditioning on entities and their relations, we also conducted experiments that considered an alternative method for selecting and disambiguating entities similar to Jia et al. (2016). In this approach we use our model’s copy mechanism to copy tokens corresponding to the surface forms of entity arguments, rather than copying entities directly. P(aj = CopyToken[i] | x, y1:j−1) ∝ exp((zjWx)⊺w(Nenc) xi ), (9) where Wx is a learned matrix, and where i ∈ {1, . . . , |x|} refers to the index of token xi ∈Vin. If aj = CopyToken[i], then yj = xi. This allows us to ablate entity information in the input while still generating logical forms. When copying tokens, the decoder determines the type of the entity using an additional output symbol. For GEO, the actual entity can then be identified as a post-processing step, as a type and surface form is sufficient. For other tasks this could require a more complicated post-processing step to disambiguate entities given a surface form and type. Method GEO Copying Entities GNN w/ edge vectors + BERT 92.5 GNN w/ edge vectors 89.3 Copying Tokens GNN w/ edge vectors 87.9 −entity candidates, e 84.3 BERT 89.6 Table 3: Experimental results for copying tokens instead of entities when decoding, with and without conditioning on the set of entity candidates, e. 5.3 Results and Analysis Accuracies on GEO, ATIS, and SPIDER are shown in Table 2. GEO and ATIS Without pre-training, and despite adding a bit of entity ambiguity, we achieve similar results to other recent approaches that disambiguate and replace entities in the utterance as a pre-processing step during both training and evaluating (Dong and Lapata, 2016, 2018). When incorporating BERT, we increase absolute accuracies over Dong and Lapata (2018) on GEO and ATIS by 3.2% and 2.0%, respectively. Notably, 102 they also present techniques and results that leverage constrained decoding, which our approach would also likely further benefit from. For GEO, we find that when ablating all entity information in our model and copying tokens instead of entities, we achieve similar results as Jia and Liang (2016) when also ablating their data augmentation method, as shown in Table 3. This is expected, since when ablating entities completely, our architecture essentially reduces to the same sequence-to-sequence task setup. These results demonstrate the impact of conditioning on the entity candidates, as it improves performance even on the token copying setup. It appears that leveraging BERT can partly compensate for not conditioning on entity candidates, but combining BERT with our GNN approach and copying entities achieves 2.9% higher accuracy than using only a BERT encoder and copying tokens. For ATIS, our results are outperformed by Wang et al. (2014) by 1.6%. Their approach uses hand-engineered templates to build a CCG lexicon. Some of these templates attempt to handle the specific types of ungrammatical utterances in the ATIS task. SPIDER For SPIDER, a relatively new dataset, there is less prior work. Competitive approaches have been specific to the text-to-SQL task (Xu et al., 2017; Yu et al., 2018a,b), incorporating taskspecific methods to condition on table and column information, and incorporating SQL-specific structure when decoding. Our approach improves absolute accuracy by +7.3% relative to Yu et al. (2018b) without using any pre-trained language representations, or constrained decoding. 
Our approach could also likely benefit from some of the other aspects of Yu et al. (2018b) such as more structured decoding, data augmentation, and using pre-trained representations (they use GloVe (Pennington et al., 2014)) for tokens, columns, and tables. Our results were surprisingly worse when attempting to incorporate BERT. Of course, successfully incorporating pre-trained representations is not always straightforward. In general, we found using BERT within our architecture to be sensitive to learning rates and learning rate schedules. Notably, the evaluation setup for SPIDER is very different than training, as examples are for tables unseen during training. Models may not generalize well to unseen tables and columns. It’s likely that successfully incorporating BERT for SPIDER would require careful tuning of hyperparameters specifically for the database split configuration. Entity Spans and Relations Ablating span relations between entities and tokens for GEO and ATIS is shown in Table 4. The impact is more significant for ATIS, which contains many queries with multiple entities of the same type, such as nonstop flights seattle to boston where disambiguating the origin and destination entities requires knowledge of which tokens they are associated with, given that we represent entities based only on their types for these tasks. We leave for future work consideration of edges between entity candidates that incorporate relevant domain knowledge for these tasks. Edge Ablations GEO ATIS GNN w/ edge vectors 89.3 87.1 −entity span edges 88.6 34.2 Table 4: Results for ablating information about entity candidate spans for GEO and ATIS. For SPIDER, results ablating relations between entities and tokens, and relations between entities, are shown in Table 5. This demonstrates the importance of entity relations, as they include useful information for disambiguating entities such as which columns belong to which tables, and which columns have foreign key relations. Edge Ablations SPIDER GNN w/ edge vectors 32.1 −entity span edges 27.8 −entity relation edges 26.3 Table 5: Results for ablating information about relations between entity candidates and tokens for SPIDER. Edge Representations Using additive edge vectors outperforms using learned edge matrix transformations for implementing f, across all tasks. While the vector formulation is less expressive, it also introduces far fewer parameters per edge type, which can be an important consideration given that our graph contains many similar edge labels, such as those representing similar relative positions between tokens. We leave further exploration of more expressive edge representations to future work. Another direction to explore is a 103 heterogeneous formulation of the GNN sub-layer, that employs different formulations for different subsets of nodes, e.g. for tokens and entities. 6 Conclusions We have presented an architecture for semantic parsing that uses a Graph Neural Network (GNN) to condition on a graph of tokens, entities, and their relations. Experimental results have demonstrated that this approach can achieve competitive results across a diverse set of tasks, while also providing a conceptually simple way to incorporate entities and their relations during parsing. For future direction, we are interested in exploring constrained decoding, better incorporating pre-trained language representations within our architecture, conditioning on additional relations between entities, and different GNN formulations. 
More broadly, we have presented a flexible approach for conditioning on available knowledge in the form of entities and their relations, and demonstrated its effectiveness for semantic parsing. Acknowledgments We would like to thank Karl Pichotta, Zuyao Li, Tom Kwiatkowski, and Dipanjan Das for helpful discussions. Thanks also to Ming-Wei Chang and Kristina Toutanova for their comments, and to all who provided feedback in draft reading sessions. Finally, we are grateful to the anonymous reviewers for their useful feedback. References D. Alvarez-Melis and T. Jaakkola. 2017. Tree structured decoding with doubly recurrent neural networks. In International Conference on Learning Representations (ICLR). Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 47–52. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1957–1967. Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. 2018. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261. Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 273–283. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544. Deborah A Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the atis task: The atis-3 corpus. In Proceedings of the workshop on Human Language Technology, pages 43–48. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 33–43. Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In ACL. Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. 2017. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pages 1263–1272. Omer Goldman, Veronica Latcinnik, Udi Naveh, Amir Globerson, and Jonathan Berant. 2017. Weaklysupervised semantic parsing with abstract examples. arXiv preprint arXiv:1711.05240. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. 
Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1631–1640. 104 Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 140–149. Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024–1034. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 12–22. Rohit J Kate, Yuk Wah Wong, and Raymond J Mooney. 2005. Learning to transform natural to formal languages. In Proceedings of the National Conference on Artificial Intelligence, volume 20, page 1062. Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press; 1999. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR). Thomas N Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR). Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1516–1526. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1545–1556. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in ccg grammar induction for semantic parsing. In Proceedings of the conference on empirical methods in natural language processing, pages 1512– 1523. Association for Computational Linguistics. Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 2016. Gated graph sequence neural networks. In International Conference on Learning Representations (ICLR). Percy Liang, Michael I Jordan, and Dan Klein. 2013. Learning dependency-based compositional semantics. Computational Linguistics, 39(2):389–446. Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk`y, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 599–609. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506–1515. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. 
Abstract syntax networks for code generation and semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1139–1149. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2009. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 464–468. Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. 2018. Open domain question answering using early fusion of knowledge bases and text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4231– 4242. Lappoon R Tang and Raymond J Mooney. 2000. Automated construction of database interfaces: Integrating statistical and relational learning for semantic parsing. In Proceedings of the 2000 Joint SIGDAT conference on Empirical methods in natural language processing and very large corpora: held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics-Volume 13, pages 133–141. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. 105 Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2018. Graph attention networks. In International Conference on Learning Representations (ICLR). Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015a. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015b. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pages 2773–2781. Adrienne Wang, Tom Kwiatkowski, and Luke Zettlemoyer. 2014. Morpho-syntactic lexical generalization for ccg semantic parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1284–1295. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1341–1350. Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. 2018. Representation learning on graphs with jumping knowledge networks. In International Conference on Learning Representations (ICLR). Xiaojun Xu, Chang Liu, and Dawn Song. 2017. Sqlnet: Generating structured queries from natural language without reinforcement learning. arXiv preprint arXiv:1711.04436. Yi Yang and Ming-Wei Chang. 2015. S-mart: Novel tree-based structured learning algorithms applied to tweet entity linking. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 504–513. Wen-tau Yih, Matthew Richardson, Chris Meek, MingWei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 201–206. Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 440–450. Tao Yu, Zifan Li, Zilin Zhang, Rui Zhang, and Dragomir Radev. 2018a. Typesql: Knowledgebased type-aware neural text-to-sql generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 588–594. Tao Yu, Michihiro Yasunaga, Kai Yang, Rui Zhang, Dongxu Wang, Zifan Li, and Dragomir Radev. 2018b. Syntaxsqlnet: Syntax tree networks for complex and cross-domaintext-to-sql task. arXiv preprint arXiv:1810.05237. Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, et al. 2018c. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. arXiv preprint arXiv:1809.08887. John M Zelle and Raymond J Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the thirteenth national conference on Artificial intelligence-Volume 2, pages 1050–1055. AAAI Press. Luke S Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: structured classification with probabilistic categorial grammars. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, pages 658–666. AUAI Press. Luke S Zettlemoyer and Michael Collins. 2007. Online learning of relaxed ccg grammars for parsing to logical form. EMNLP-CoNLL 2007, page 678. Kai Zhao and Liang Huang. 2015. Type-driven incremental semantic parsing with polymorphism. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1416–1421. 106 A Supplemental Material A.1 Entity Candidate Generator Details In this section we provide details of how we constructed entity candidate generators for each task. GEO The annotator was constructed from the geobase database, which provides a list of geographical facts. For each entry in the database, we extracted the name as the entity alias and the type (e.g., “state”, “city”) as its attribute. Since not all cities used in the GEO query set are listed as explicit entries, we also used cities in the state entries. Finally, geobase has no entries around countries, so we added relevant aliases for “USA” with a special “country” attribute. There was 1 example where an entity in the logical form did not appear in the input, leading to the example being dropped from the training set. In lieu of task-specific edge relations, we used binary edge labels between entities that captured which annotations span the same tokens. However, experiments demonstrated that these edges did not significantly affect performance. 
We leave consideration of other types of entity relations for these tasks to future work. ATIS We constructed a lexicon mapping natural language entity aliases in the dataset (e.g., “newark international”, “5pm”) to unique entity identifiers (e.g. “ewr:ap”, “1700:ti”). For ATIS, this lexicon required some manual construction. Each entity identifier has a two-letter suffix (e.g., “ap”, “ti”) that maps it to a single attribute (e.g., “airport”, “time”). We allowed overlapping entity mentions when the entities referred to different entity identifiers. For instance, in the query span “atlanta airport”, we include both the city of Atlanta and the Atlanta airport. Notably there were 9 examples where one of the entities used as an argument in the logical form did not have a corresponding mention in the input utterance. From manual inspection, many of the dropped examples appear to have incorrectly annotated logical forms. These examples were dropped from training set or marked as incorrect if they appeared in the test set. We use the same binary edge labels between entities as for GEO. SPIDER For SPIDER we generalize our notion of entities to consider tables and columns as entities. We attempt to determine spans for each table and column by computing normalized Levenshtein distance between table and column names and unigrams or bigrams in the utterance. The best alignment having a score > 0.75 is selected, and we use these generated alignments to populate the edges between tokens and entity candidates. We generate a set of attributes for the table based on unigrams in the table name, and an attribute to identify the entity as a table. Likewise, for columns, we generate a set of attributes based on unigrams in the column name as well as an attribute to identify the value type of the column. We also include attributes indicating whether an alignment was found between the entity and the input text. We include 3 task-specific edge label types between entity candidates to denote bi-directional relations between column entities and the table entity they belong to, and to denote the presence of a foreign key relationship between columns.
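As a rough illustration of the alignment heuristic described above, the following is a minimal Python sketch, assuming a standard dynamic-programming Levenshtein distance, similarity normalized by the longer string, and the 0.75 threshold mentioned in the text; the function names and tie-breaking are our own choices rather than the released implementation.

```python
def levenshtein(a, b):
    """Classic edit-distance dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def normalized_similarity(a, b):
    """1 minus Levenshtein distance divided by the longer length, in [0, 1]."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def align_schema_item(name, tokens, threshold=0.75):
    """Best unigram/bigram span of `tokens` matching `name`, or None if below threshold."""
    best = None
    for n in (1, 2):
        for i in range(len(tokens) - n + 1):
            span = " ".join(tokens[i:i + n])
            score = normalized_similarity(name.lower(), span.lower())
            if score > threshold and (best is None or score > best[0]):
                best = (score, (i, i + n))
    return best
```

For a hypothetical table named "singer" and the utterance "how many singers do we have", the unigram "singers" scores roughly 0.86 and would therefore be aligned, populating an edge between that token and the table entity.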
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1049–1058 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1049 Searching for Effective Neural Extractive Summarization: What Works and What’s Next Ming Zhong∗, Pengfei Liu∗, Danqing Wang, Xipeng Qiu†, Xuanjing Huang Shanghai Key Laboratory of Intelligent Information Processing, Fudan University School of Computer Science, Fudan University 825 Zhangheng Road, Shanghai, China {mzhong18,pfliu14,dqwang18,xpqiu,xjhuang}@fudan.edu.cn Abstract The recent years have seen remarkable success in the use of deep neural networks on text summarization. However, there is no clear understanding of why they perform so well, or how they might be improved. In this paper, we seek to better understand how neural extractive summarization systems could benefit from different types of model architectures, transferable knowledge and learning schemas. Additionally, we find an effective way to improve current frameworks and achieve the state-ofthe-art result on CNN/DailyMail by a large margin based on our observations and analyses. Hopefully, our work could provide more clues for future research on extractive summarization. Source code will be available on Github1. 1 Introduction Recent years has seen remarkable success in the use of deep neural networks for text summarization (See et al., 2017; Celikyilmaz et al., 2018; Jadhav and Rajan, 2018). So far, most research utilizing the neural network for text summarization has revolved around architecture engineering (Zhou et al., 2018; Chen and Bansal, 2018; Gehrmann et al., 2018). Despite their success, it remains poorly understood why they perform well and what their shortcomings are, which limits our ability to design better architectures. The rapid development of neural architectures calls for a detailed empirical study of analyzing and understanding existing models. In this paper, we primarily focus on extractive summarization since they are computationally efficient, and can generate grammatically and coherent summaries (Nallapati et al., 2017). and seek to ∗These two authors contributed equally. †Corresponding author. 1https://github.com/fastnlp/fastNLP better understand how neural network-based approaches to this task could benefit from different types of model architectures, transferable knowledge, and learning schemas, and how they might be improved. Architectures Architecturally, the better performance usually comes at the cost of our understanding of the system. To date, we know little about the functionality of each neural component and the differences between them (Peters et al., 2018b), which raises the following typical questions: 1) How does the choice of different neural architectures (CNN, RNN, Transformer) influence the performance of the summarization system? 2) Which part of components matters for specific dataset? 3) Do current models suffer from the over-engineering problem? Understanding the above questions can not only help us to choose suitable architectures in different application scenarios, but motivate us to move forward to more powerful frameworks. 
External Transferable Knowledge and Learning schemas Clearly, the improvement in accuracy and performance is not merely because of the shift from feature engineering to structure engineering, but the flexible ways to incorporate external knowledge (Mikolov et al., 2013; Peters et al., 2018a; Devlin et al., 2018) and learning schemas to introduce extra instructive constraints (Paulus et al., 2017; Arumae and Liu, 2018). For this part, we make some first steps toward answers to the following questions: 1) Which type of pre-trained models (supervised or unsupervised pre-training) is more friendly to the summarization task? 2) When architectures are explored exhaustively, can we push the state-of-the-art results to a new level by introducing external transferable knowledge or changing another learning schema? To make a comprehensive study of above an1050 Perspective Content Sec.ID Learning Schemas Sup. & Reinforce. 4.4 Structure Dec. Pointer & SeqLab. 4.3.1 Enc. LSTM & Transformer 4.3.2 Knowledge Exter. GloVe BERT NEWS. 4.3.3 Inter. Random Table 1: Outline of our experimental design. Dec. and Enc. represent decoder and encoder respectively. Sup. denotes supervised learning and NEWS. means supervised pre-training knowledge. alytical perspectives, we first build a testbed for summarization system, in which training and testing environment will be constructed. In the training environment, we design different summarization models to analyze how they influence the performance. Specifically, these models differ in the types of architectures (Encoders: CNN, LSTM, Transformer (Vaswani et al., 2017); Decoders: auto-regressive2, non auto-regressive), external transferable knowledge (GloVe (Pennington et al., 2014), BERT (Devlin et al., 2018), NEWSROOM (Grusky et al., 2018)) and different learning schemas (supervised learning and reinforcement learning). To peer into the internal working mechanism of above testing cases, we provide sufficient evaluation scenarios in the testing environment. Concretely, we present a multi-domain test, sentence shuffling test, and analyze models by different metrics: repetition, sentence length, and position bias, which we additionally developed to provide a better understanding of the characteristics of different datasets. Empirically, our main observations are summarized as: 1) Architecturally speaking, models with autoregressive decoder are prone to achieving better performance against non auto-regressive decoder. Besides, LSTM is more likely to suffer from the architecture overfitting problem while Transformer is more robust. 2) The success of extractive summarization system on the CNN/DailyMail corpus heavily relies on the ability to learn positional information of the sentence. 3) Unsupervised transferable knowledge is more useful than supervised transferable knowl2Auto-regressive indicates that the decoder can make current prediction with knowledge of previous predictions. edge since the latter one is easily influenced by the domain shift problem. 4) We find an effective way to improve the current system, and achieving the state-of-the-art result on CNN/DailyMail by a large margin with the help of unsupervised transferable knowledge (42.39 R-1 score). And this result can be further enhanced by introducing reinforcement learning (42.69 R-1 score). Hopefully, this detailed empirical study can provide more hints for the follow-up researchers to design better architectures and explore new stateof-the-art results along a right direction. 
2 Related Work The work is connected to the following threads of work of NLP research. Task-oriented Neural Networks Interpreting Without knowing the internal working mechanism of the neural network, it is easy for us to get into a hobble when the performance of a task has reached the bottleneck. More recently, Peters et al. (2018b) investigate how different learning frameworks influence the properties of learned contextualized representations. Different from this work, in this paper, we focus on dissecting the neural models for text summarization. A similar work to us is Kedzie et al. (2018), which studies how deep learning models perform context selection in terms of several typical summarization architectures, and domains. Compared with this work, we make a more comprehensive study and give more different analytic aspects. For example, we additionally investigate how transferable knowledge influence extractive summarization and a more popular neural architecture, Transformer. Besides, we come to inconsistent conclusions when analyzing the auto-regressive decoder. More importantly, our paper also shows how existing systems can be improved, and we have achieved a state-of-the-art performance on CNN/DailyMail. Extractive Summarization Most of recent work attempt to explore different neural components or their combinations to build an end-to-end learning model. Specifically, these work instantiate their encoder-decoder framework by choosing recurrent neural networks (Cheng and Lapata, 2016; Nallapati et al., 2017; Zhou et al., 2018) as encoder, auto-regressive decoder (Chen and 1051 Bansal, 2018; Jadhav and Rajan, 2018; Zhou et al., 2018) or non auto-regressive decoder (Isonuma et al., 2017; Narayan et al., 2018; Arumae and Liu, 2018) as decoder, based on pre-trained word representations (Mikolov et al., 2013; Pennington et al., 2014). However, how to use Transformer in extractive summarization is still a missing issue. In addition, some work uses reinforcement learning technique (Narayan et al., 2018; Wu and Hu, 2018; Chen and Bansal, 2018), which can provide more direct optimization goals. Although above work improves the performance of summarization system from different perspectives, yet a comprehensive study remains missing. 3 A Testbed for Text Summarization To analyze neural summarization system, we propose to build a Training-Testing environment, in which different text cases (models) are firstly generated under different training settings, and they are further evaluated under different testing settings. Before the introduction of our Train-Testing testbed, we first give a description of text summarization. 3.1 Task Description Existing methods of extractive summarization directly choose and output the salient sentences (or phrases) in the original document. Formally, given a document D = d1, · · · , dn consisting of n sentences, the objective is to extract a subset of sentences R = r1, · · · , rm from D, m is deterministic during training while is a hyper-parameter in testing phase. Additionally, each sentence contains |di| words di = x1, · · · , x|di|. Generally, most of existing extractive summarization systems can be abstracted into the following framework, consisting of three major modules: sentence encoder, document encoder and decoder. At first, a sentence encoder will be utilized to convert each sentence di into a sentential representation di. Then these sentence representations will be contextualized by a document encoder to si. 
Finally, a decoder will extract a subset of sentences based on these contextualized sentence representations. 3.2 Setup for Training Environment The objective of this step is to provide typical and diverse testing cases (models) in terms of model architectures, transferable knowledge and learning schemas. 3.2.1 Sentence Encoder We instantiate our sentence encoder with CNN layer (Kim, 2014). We don’t explore other options as sentence encoder since strong evidence of previous work (Kedzie et al., 2018) shows that the differences of existing sentence encoder don’t matter too much for final performance. 3.2.2 Document Encoder Given a sequence of sentential representation d1, · · · , dn, the duty of document encoder is to contextualize each sentence therefore obtaining the contextualized representations s1, · · · , sn. To achieve this goal, we investigate the LSTM-based structure and the Transformer structure, both of which have proven to be effective and achieved the state-of-the-art results in many other NLP tasks. Notably, to let the model make the best of its structural bias, stacking deep layers is allowed. LSTM Layer Long short-term memory network (LSTM) was proposed by (Hochreiter and Schmidhuber, 1997) to specifically address this issue of learning long-term dependencies, which has proven to be effective in a wide range of NLP tasks, such as text classification (Liu et al., 2017, 2016b), semantic matching (Rockt¨aschel et al., 2015; Liu et al., 2016a), text summarization (Rush et al., 2015) and machine translation (Sutskever et al., 2014). Transformer Layer Transformer (Vaswani et al., 2017) is essentially a feed-forward selfattention architecture, which achieves pairwise interaction by attention mechanism. Recently, Transformer has achieved great success in many other NLP tasks (Vaswani et al., 2017; Dai et al., 2018), and it is appealing to know how this neural module performs on text summarization task. 3.2.3 Decoder Decoder is used to extract a subset of sentences from the original document based on contextualized representations: s1, · · · , sn. Most existing architecture of decoders can divide into autoregressive and non auto-regressive versions, both of which are investigated in this paper. Sequence Labeling (SeqLab) The models, which formulate extractive summarization task as a sequence labeling problem, are equipped with non auto-regressive decoder. Formally, given a 1052 document D consisting of n sentences d1, · · · , dn, the summaries are extracted by predicting a sequence of label y1, · · · , yn (yi ∈{0, 1}) for the document, where yi = 1 represents the i-th sentence in the document should be included in the summaries. Pointer Network (Pointer) As a representative of auto-regressive decoder, pointer network-based decoder has shown superior performance for extractive summarization (Chen and Bansal, 2018; Jadhav and Rajan, 2018). Pointer network selects the sentence by attention mechanism using glimpse operation (Vinyals et al., 2015). When it extracts a sentence, pointer network is aware of previous predictions. 3.2.4 External transferable knowledge The success of neural network-based models on NLP tasks cannot only be attributed to the shift from feature engineering to structural engineering, but the flexible ways to incorporate external knowledge (Mikolov et al., 2013; Peters et al., 2018a; Devlin et al., 2018). The most common form of external transferable knowledge is the parameters pre-trained on other corpora. 
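Before detailing the pre-trained knowledge we consider, the decoder contrast of Section 3.2.3 can be made concrete with a short PyTorch-style sketch; the class names, tensor shapes, and the simplified dot-product attention (the actual system uses a glimpse operation) are illustrative only and are not taken from the released code.

```python
import torch
import torch.nn as nn

class SeqLabDecoder(nn.Module):
    """Non auto-regressive decoder (SeqLab): an independent keep/skip score per sentence."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)

    def forward(self, sent_reprs):                       # sent_reprs: (n_sents, dim)
        return torch.sigmoid(self.scorer(sent_reprs)).squeeze(-1)   # (n_sents,) probabilities

class PointerDecoder(nn.Module):
    """Auto-regressive decoder (Pointer): each extraction step attends over all
    sentences, conditioned on the previously extracted sentence."""
    def __init__(self, dim):
        super().__init__()
        self.cell = nn.LSTMCell(dim, dim)
        self.attn = nn.Linear(dim, dim, bias=False)

    def forward(self, sent_reprs, n_extract):            # sent_reprs: (n_sents, dim)
        dim = sent_reprs.size(1)
        prev = sent_reprs.new_zeros(1, dim)               # "start" input
        h, c = sent_reprs.new_zeros(1, dim), sent_reprs.new_zeros(1, dim)
        picked = []
        for _ in range(n_extract):
            h, c = self.cell(prev, (h, c))
            scores = sent_reprs @ self.attn(h).squeeze(0)  # simplified attention scores
            if picked:
                scores[picked] = float("-inf")             # never re-extract a sentence
            idx = int(scores.argmax())
            picked.append(idx)
            prev = sent_reprs[idx].unsqueeze(0)            # aware of previous prediction
        return picked
```

The sketch makes the key difference explicit: SeqLab scores every sentence independently, whereas Pointer feeds its own previous selection back into the next decision.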
To investigate how different pre-trained models influence the summarization system, we take the following pre-trained knowledge into consideration. Unsupervised transferable knowledge Two typical unsupervised transferable knowledge are explored in this paper: context independent word embeddings (Mikolov et al., 2013; Pennington et al., 2014) and contextualized word embeddings (Peters et al., 2018a; Devlin et al., 2018), have put the state-of-the-art results to new level on a large number of NLP taks recently. Supervised pre-trained knowledge Besides unsupervised pre-trained knowledge, we also can utilize parameters of networks pre-trained on other summarization datasets. The value of this investigation is to know transferability between different dataset. To achieve this, we first pre-train our model on the NEWSROOM dataset (Grusky et al., 2018), which is one of the largest datasets and contains samples from different domains. Then, we fine-tune our model on target domains that we investigate. 3.2.5 Learning Schemas Utilizing external knowledge provides a way to seek new state-of-the-art results from the perspective of introducing extra data. Additionally, an alternative way is resorting to change the learning schema of the model. In this paper, we also explore how different learning schemas influence extractive summarization system by comparing supervised learning and reinforcement learning. 3.3 Setup for Testing Environment In the testing environment, we provide sufficient evaluation scenarios to get the internal working mechanism of testing models. Next, we will make a detailed deception. ROUGE Following previous work in text summarization, we evaluate the performance of different architectures with the standard ROUGE-1, ROUGE-2 and ROUGE-L F1 scores (Lin, 2004) by using pyrouge package3. Cross-domain Evaluation We present a multidomain evaluation, in which each testing model will be evaluated on multi-domain datasets based on CNN/DailyMail and NEWSROOM. Detail of the multi-domain datasets is descried in Tab. 2. Repetition We design repetition score to test how different architectures behave diversely on avoiding generating unnecessary lengthy and repeated information. We use the percentage of repeated n-grams in extracted summary to measure the word-level repetition, which can be calculated as: REPn = CountUniq(ngram) Count(ngram) (1) where Count is used to count the number of ngrams and Uniq is used to eliminate n-gram duplication. The closer the word-based repetition score is to 1, the lower the repeatability of the words in summary. Positional Bias It is meaningful to study whether the ground truth distribution of the datasets is different and how it affects different architectures. To achieve this we design a positional bias to describe the uniformity of ground truth distribution in different datasets, which can be calcu3pypi.python.org/pypi/pyrouge/0.1.3 1053 lated as: PosBias = k X i=1 −p(i) log(p(i)) (2) We divide each article into k parts (we choose k = 30 because articles from CNN/DailyMail and NEWSROOM have 30 sentences by average) and p(i) denotes the probability that the first golden label is in part i of the articles. Sentence Length Sentence length will affect different metrics to some extent. We count the average length of the k-th sentence extracted from different decoders to explore whether the decoder could perceive the length information of sentences. Sentence Shuffling We attempt to explore the impact of sentence position information on different structures. 
Therefore, we shuffle the orders of sentences and observe the robustness of different architectures to out-of-order sentences. 4 Experiment 4.1 Datasets Instead of evaluating model solely on a single dataset, we care more about how our testing models perform on different types of data, which allows us to know if current models suffer from the over-engineering problem. Domains Train Valid Test CNN/DailyMail 287,227 13,368 11,490 NYTimes 152,981 16,490 16,624 WashingtonPost 96,775 10,103 10,196 FoxNews 78,795 8,428 8,397 TheGuardian 58,057 6,376 6,273 NYDailyNews 55,653 6,057 5,904 WSJ 49,968 5,449 5,462 USAToday 44,921 4,628 4,781 Table 2: Statistics of multi-domain datasets based on CNN/DailyMail and NEWSROOM. CNN/DailyMail The CNN/DailyMail question answering dataset (Hermann et al., 2015) modified by (Nallapati et al., 2016) is commonly used for summarization. The dataset consists of online news articles with paired human-generated summaries (3.75 sentences on average). For the data prepossessing, we use the data with nonanonymized version as (See et al., 2017), which doesn’t replace named entities. NEWSROOM Recently, NEWSROOM is constructed by (Grusky et al., 2018), which contains 1.3 million articles and summaries extracted from 38 major news publications across 20 years. We regard this diversity of sources as a diversity of summarization styles and select seven publications with the largest number of data as different domains to do the cross-domain evaluation. Due to the large scale data in NEWSROOM, we also choose this dataset to do transfer experiment. 4.2 Training Settings For different learning schemas, we utilize cross entropy loss function and reinforcement learning method close to Chen and Bansal (2018) with a small difference: we use the precision of ROUGE1 as a reward for every extracted sentence instead of the F1 value of ROUGE-L. For context-independent word representations (GloVe, Word2vec), we directly utilize them to initialize our words of each sentence, which can be fine-tuned during the training phase. For BERT, we truncate the article to 512 tokens and feed it to a feature-based BERT (without gradient), concatenate the last four layers and get a 128-dimensional token embedding after passing through a MLP. 4.3 Experimental Observations and Analysis Next, we will show our findings and analyses in terms of architectures and external transferable knowledge. 4.3.1 Analysis of Decoders We understand the differences between decoder Pointer and SeqLab by probing their behaviours in different testing environments. Domains From Tab. 3, we can observe that models with pointer-based decoder are prone to achieving better performance against SeqLabbased decoder. Specifically, among these eight datasets, models with pointer-based decoder outperform SeqLab on six domains and achieves comparable results on the other two domains. For example, in “NYTimes”, “WashingtonPost” and “TheGuardian” domains, Pointer surpasses SeqLab by at least 1.0 improvment (R-1). 1054 Model R-1 R-2 R-L R-1 R-2 R-L R-1 R-2 R-L R-1 R-2 R-L Dec. Enc. 
CNN/DM (2/3) NYTimes (2) WashingtonPost (1) Foxnews (1) Lead 40.11 17.64 36.32 28.75 16.10 25.16 22.21 11.40 19.41 54.20 46.60 51.89 Oracle 55.24 31.14 50.96 52.17 36.10 47.68 42.91 27.11 39.42 73.54 65.50 71.46 SeqLab LSTM 41.22 18.72 37.52 30.26 17.18 26.58 21.27 10.78 18.56 59.32 51.82 56.95 Transformer 41.31 18.85 37.63 30.03 17.01 26.37 21.74 10.92 18.92 59.35 51.82 56.97 Pointer LSTM 41.56 18.77 37.83 31.31 17.28 27.23 24.16 11.84 20.67 59.53 51.89 57.08 Transformer 41.36 18.59 37.67 31.34 17.25 27.16 23.77 11.63 20.48 59.35 51.68 56.90 Dec. Enc. TheGuardian (1) NYDailyNews (1) WSJ (1) USAToday (1) Lead 22.51 7.69 17.78 45.26 35.53 42.70 39.63 27.72 36.10 29.44 18.92 26.65 Oracle 41.08 21.49 35.80 73.99 64.80 72.09 57.15 43.06 53.27 47.17 33.40 44.02 SeqLab LSTM 23.02 8.12 18.29 53.13 43.52 50.53 41.94 29.54 38.19 30.30 18.96 27.40 Transformer 23.49 8.43 18.65 53.66 44.19 51.07 42.98 30.22 39.02 30.97 19.77 28.03 Pointer LSTM 24.71 8.55 19.30 53.31 43.37 50.52 43.29 30.20 39.12 31.73 19.89 28.50 Transformer 24.86 8.66 19.45 54.30 44.70 51.67 43.30 30.17 39.07 31.95 20.11 28.78 Table 3: Results of different architectures over different domains, where Enc. and Dec. represent document encoder and decoder respectively. Lead means to extract the first k sentences as the summary, usually as a competitive lower bound. Oracle represents the ground truth extracted by the greedy algorithm (Nallapati et al., 2017), usually as the upper bound. The number k in parentheses denotes k sentences are extracted during testing and choose lead-k as a lower bound for this domain. All the experiments use word2vec to obtain word representations. We attempt to explain this difference from the following three perspectives. Repetition For domains that need to extract multiple sentences as the summary (first two domains in Tab. 3), Pointer is aware of the previous prediction which makes it to reduce the duplication of n-grams compared to SeqLab. As shown in Fig. 1(a), models with Pointer always get higher repetition scores than models with SeqLab when extracting six sentences, which indicates that Pointer does capture word-level information from previous selected sentences and has positive effects on subsequent decisions. Positional Bias For domains that only need to extract one sentence as the summary (last six domains in Tab. 3), Pointer still performs better than SeqLab. As shown in Fig. 1(b), the performance gap between these two decoders grows as the positional bias of different datasets increases. For example, from the Tab. 3, we can see in the domains with low-value positional bias, such as “FoxNews(1.8)”, “NYDailyNews(1.9)”, SeqLab achieves closed performance against Pointer. By contrast, the performance gap grows when processing these domains with highvalue positional bias (“TheGuardian(2.9)”, “WashingtonPost(3.0)”). Consequently, SeqLab is more sensitive to positional bias, which impairs its performance on some datasets. Sentence length We find Pointer shows the ability to capture sentence length information based on previous predictions, while SeqLab doesn’t. We can see from the Fig. 1(c) that models with Pointer tend to choose longer sentences as the first sentence and greatly reduce the length of the sentence in the subsequent extractions. In comparison, it seems that models with SeqLab tend to extract sentences with similar length. 
The ability allows Pointer to adaptively change the length of the extracted sentences, thereby achieving better performance regardless of whether one sentence or multiple sentences are required. 4.3.2 Analysis of Encoders In this section, we make the analysis of two encoders LSTM and Transformer in different testing environments. Domains From Tab. 3, we get the following observations: 1) Transformer can outperform LSTM on some datasets “NYDailyNews” by a relatively large margin while LSTM beats Transformer on some domains with closed improvements. Besides, during different training phases of these eight domains, the hyper-parameters of Transformer keep unchanged4 while for LSTM, many sets of hyperparameters are used5. 44 layers 512 dimensions for Pointer and 12 layers 512 dimensions for SeqLab 5the number of layers searches in (2, 4, 6, 8) and dimen1055 REP2 REP3 REP4 REP5 0.9 0.95 Score SLSTM STransformer PLSTM PTransformer (a) Repetition score 1.8 1.9 2.3 2.6 2.9 3.0 0 0.5 1 1.5 2 2.5 Positional Bias ∆R R-1 R-2 R-L (b) Positional bias 1 2 3 4 20 25 30 35 #Sent Avg. Length SLSTM STransformer PLSTM PTransformer (c) Average length Figure 1: Different behaviours of two decoders (SeqLab and Pointer) under different testing environment. (a) shows repetition scores of different architectures when extracting six sentences on CNN/DailyMail. (b) shows the relationship between ∆R and positional bias. The abscissa denotes the positional bias of six different datasets and ∆R denotes the average ROUGE difference between the two decoders under different encoders. (c) shows average length of k-th sentence extracted from different architectures. R-1 R-2 R-L 10 15 20 ∆R (%) LSTM Transformer Figure 2: Results of different document encoders with Pointer on normal and shuffled CNN/DailyMail. ∆R denotes the decrease of performance when the sentences in document are shuffled. Above phenomena suggest that LSTM easily suffers from the architecture overfitting problem compared with Transformer. Additionally, in our experimental setting, Transformer is more efficient to train since it is two or three times faster than LSTM. 2) When equipped with SeqLab decoder, Transformer always obtains a better performance compared with LSTM, the reason we think is due to the non-local bias (Wang et al., 2018) of Transformer. Shuffled Testing In this settings, we shuffle the orders of sentences in training set while test set keeps unchanged. We compare two models with different encoders (LSTM, Transformer) and the results can be seen in Fig. 2. Generally, there is significant drop of performance about these two models. However, Transformer obtains lower decrease against LSTM, suggesting that Transformer sion searches in (512, 1024, 2048) α β R-1 R-2 R-L 1 0 37.90 15.69 34.31 √ d 1 40.93 18.49 37.24 1 1 41.31 18.85 37.63 1 √ d 40.88 18.42 37.19 0 1 40.39 17.67 36.54 Nallapati et al. (2017) 39.6 16.2 35.3 Narayan et al. (2018) 40.2 18.2 36.6 Table 4: Results of Transformer with SeqLab using different proportions of sentence embedding and positional embedding on CNN/DailyMail. The input of Transformer is α ∗sentence embedding plus β ∗positional embedding6. The bottom half of the table contains models that have similar performance with Transformer that only know positional information. are more robust. Disentangling Testing Transformer provides us an effective way to disentangle position and content information, which enables us to design a specific experiment, investigating what role positional information plays. As shown in Tab. 
4, we dynamically regulate the ratio between sentence embedding and positional embedding by two coefficients α and β. Surprisingly, we find even only utilizing positional embedding (the model is only told how many sentences the document contains), our model can achieve 40.08 on R-1, which is comparable to many existing models. By 6In Vaswani et al. (2017), the input of Transformer is √ d ∗word embedding plus positional embedding, so we design the above different proportions to carry out the disentangling test. 1056 Model R-1 R-2 R-L R-1 R-2 R-L R-1 R-2 R-L R-1 R-2 R-L Dec. Enc. Baseline + GloVe + BERT + NEWSROOM SeqLab LSTM 41.22 18.72 37.52 41.33 18.78 37.64 42.18 19.64 38.53 41.48 18.95 37.78 Transformer 41.31 18.85 37.63 40.19 18.67 37.51 42.28 19.73 38.59 41.32 18.83 37.63 Pointer LSTM 41.56 18.77 37.83 41.15 18.38 37.43 42.39 19.51 38.69 41.35 18.59 37.61 Transformer 41.36 18.59 37.67 41.10 18.38 37.41 42.09 19.31 38.41 41.54 18.73 37.83 Table 5: Results of different architectures with different pre-trained knowledge on CNN/DailyMail, where Enc. and Dec. represent document encoder and decoder respectively. contrast, once the positional information is removed, the performance dropped by a large margin. This experiment shows that the success of such extractive summarization heavily relies on the ability of learning the positional information on CNN/DailyMail, which has been a benchmark dataset for most of current work. 4.3.3 Analysis of Transferable Knowledge Next, we show how different types of transferable knowledge influences our summarization models. Unsupervised Pre-training Here, as a baseline, word2vec is used to obtain word representations solely based on the training set of CNN/DailyMail. As shown in Tab. 5, we can find that contextindependent word representations can not contribute much to current models. However, when the models are equipped with BERT, we are excited to observe that the performances of all types of architectures are improved by a large margin. Specifically, the model CNN-LSTM-Pointer has achieved a new state-of-the-art with 42.11 on R-1, surpassing existing models dramatically. Supervised Pre-training In most cases, our models can benefit from the pre-trained parameters learned from the NEWSROOM dataset. However, the model CNN-LSTM-Pointer fails and the performance are decreased. We understand this phenomenon by the following explanations: The transferring process from CNN/DailyMail to NEWSROOM suffers from the domain shift problem, in which the distribution of golden labels’ positions are changed. And the observation from Fig. 2 shows that CNN-LSTM-Pointer is more sensitive to the ordering change, therefore obtaining a lower performance. Why does BERT work? We investigate two different ways of using BERT to figure out from Models R-1 R-2 R-L Chen and Bansal (2018) 41.47 18.72 37.76 Dong et al. (2018) 41.50 18.70 37.60 Zhou et al. (2018) 41.59 19.01 37.98 Jadhav and Rajan (2018)7 41.60 18.30 37.70 LSTM + PN 41.56 18.77 37.83 LSTM + PN + RL 41.85 18.93 38.13 LSTM + PN + BERT 42.39 19.51 38.69 LSTM + PN + BERT + RL 42.69 19.60 38.85 Table 6: Evaluation on CNN/DailyMail. The top half of the table is currently state-of-the-art models, and the lower half is our models. where BERT has brought improvement for extractive summarization system. In the first usage, we feed each individual sentence to BERT to obtain sentence representation, which does not contain contextualized information, and the model gets a high R-1 score of 41.7. 
However, when we feed the entire article to BERT to obtain token representations and get the sentence representation through mean pooling, model performance soared to 42.3 R-1 score. The experiment indicates that though BERT can provide a powerful sentence embedding, the key factor for extractive summarization is contextualized information and this type of information bears the positional relationship between sentences, which has been proven to be critical to extractive summarization task as above. 4.4 Learning Schema and Complementarity Besides supervised learning, in text summarization, reinforcement learning has been recently used to introduce more constraints. In this paper, we also explore if several advanced techniques be complementary with each other. We first choose the based model 7trained and evaluated on the anonymized version. 1057 LSTM-Pointer and LSTM-Pointer + BERT, then the reinforcement learning are introduced aiming to further optimize our models. As shown in Tab. 6, we observe that even though the performance of LSTM+PN has been largely improved by BERT, when applying reinforcement learning, the performance can be improved further, which indicates that there is indeed a complementarity between architecture, transferable knowledge and reinforcement learning. 5 Conclusion In this paper, we seek to better understand how neural extractive summarization systems could benefit from different types of model architectures, transferable knowledge, and learning schemas. Our detailed observations can provide more hints for the follow-up researchers to design more powerful learning frameworks. Acknowledgment We thank Jackie Chi Kit Cheung, Peng Qian for useful comments and discussions. We would like to thank the anonymous reviewers for their valuable comments. The research work is supported by National Natural Science Foundation of China (No. 61751201 and 61672162), Shanghai Municipal Science and Technology Commission (16JC1420401 and 17JC1404100), Shanghai Municipal Science and Technology Major Project(No.2018SHZDZX01)and ZJLab. References Kristjan Arumae and Fei Liu. 2018. Reinforced extractive summarization with question-focused rewards. In Proceedings of ACL 2018, Student Research Workshop. pages 105–111. Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). volume 1, pages 1662–1675. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 675–686. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 484–494. Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2018. Transformer-xl: Language modeling with longer-term dependency . Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 . Yue Dong, Yikang Shen, Eric Crawford, Herke van Hoof, and Jackie Chi Kit Cheung. 2018. 
Banditsum: Extractive summarization as a contextual bandit. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. pages 3739–3748. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. pages 4098–4109. Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). volume 1, pages 708–719. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. pages 1693– 1701. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Masaru Isonuma, Toru Fujino, Junichiro Mori, Yutaka Matsuo, and Ichiro Sakata. 2017. Extractive summarization using multi-task learning with document classification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 2101–2110. Aishwarya Jadhav and Vaibhav Rajan. 2018. Extractive summarization with swap-net: Sentences and words from alternating pointer networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 142–151. Chris Kedzie, Kathleen McKeown, and Hal Daume III. 2018. Content selection in deep learning models of summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. pages 1818–1828. 1058 Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882 . Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out . Pengfei Liu, Xipeng Qiu, Jifan Chen, and Xuanjing Huang. 2016a. Deep fusion lstms for text semantic matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 1034–1043. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016b. Recurrent neural network for text classification with multi-task learning. In Proceedings of IJCAI. pages 2873–2879. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 1–10. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 . Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Thirty-First AAAI Conference on Artificial Intelligence. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, C¸ a glar Gulc¸ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. CoNLL 2016 page 280. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Ranking sentences for extractive summarization with reinforcement learning. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). volume 1, pages 1747–1759. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304 . Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). pages 1532–1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). volume 1, pages 2227–2237. Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. pages 1499–1509. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk`y, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664 . Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 379–389. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 1073–1083. Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems. pages 3104–3112. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. pages 5998–6008. Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. 2015. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391 . Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. 2018. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 7794–7803. Yuxiang Wu and Baotian Hu. 2018. Learning to extract coherent summary via deep reinforcement learning. In Thirty-Second AAAI Conference on Artificial Intelligence. Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 654–663.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1059–1073 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1059 A Simple Theoretical Model of Importance for Summarization Maxime Peyrard∗ EPFL [email protected] Abstract Research on summarization has mainly been driven by empirical approaches, crafting systems to perform well on standard datasets with the notion of information Importance remaining latent. We argue that establishing theoretical models of Importance will advance our understanding of the task and help to further improve summarization systems. To this end, we propose simple but rigorous definitions of several concepts that were previously used only intuitively in summarization: Redundancy, Relevance, and Informativeness. Importance arises as a single quantity naturally unifying these concepts. Additionally, we provide intuitions to interpret the proposed quantities and experiments to demonstrate the potential of the framework to inform and guide subsequent works. 1 Introduction Summarization is the process of identifying the most important information from a source to produce a comprehensive output for a particular user and task (Mani, 1999). While producing readable outputs is a problem shared with the field of Natural Language Generation, the core challenge of summarization is the identification and selection of important information. The task definition is rather intuitive but involves vague and undefined terms such as Importance and Information. Since the seminal work of Luhn (1958), automatic text summarization research has focused on empirical developments, crafting summarization systems to perform well on standard datasets leaving the formal definitions of Importance latent (Das and Martins, 2010; Nenkova and McKeown, 2012). This view entails collecting datasets, defining evaluation metrics and iteratively selecting the best-performing systems either via super∗Research partly done at UKP Lab from TU Darmstadt. vised learning or via repeated comparison of unsupervised systems (Yao et al., 2017). Such solely empirical approaches may lack guidance as they are often not motivated by more general theoretical frameworks. While these approaches have facilitated the development of practical solutions, they only identify signals correlating with the vague human intuition of Importance. For instance, structural features like centrality and repetitions are still among the most used proxies for Importance (Yao et al., 2017; Kedzie et al., 2018). However, such features just correlate with Importance in standard datasets. Unsurprisingly, simple adversarial attacks reveal their weaknesses (Zopf et al., 2016). We postulate that theoretical models of Importance are beneficial to organize research and guide future empirical works. Hence, in this work, we propose a simple definition of information importance within an abstract theoretical framework. This requires the notion of information, which has received a lot of attention since the work from Shannon (1948) in the context of communication theory. Information theory provides the means to rigorously discuss the abstract concept of information, which seems particularly well suited as an entry point for a theory of summarization. However, information theory concentrates on uncertainty (entropy) about which message was chosen from a set of possible messages, ignoring the semantics of messages (Shannon, 1948). 
Yet, summarization is a lossy semantic compression depending on background knowledge. In order to apply information theory to summarization, we assume texts are represented by probability distributions over so-called semantic units (Bao et al., 2011). This view is compatible with the common distributional embedding representation of texts rendering the presented framework applicable in practice. When applied 1060 to semantic symbols, the tools of information theory indirectly operate at the semantic level (Carnap and Bar-Hillel, 1953; Zhong, 2017). Contributions: • We define several concepts intuitively connected to summarization: Redundancy, Relevance and Informativeness. These concepts have been used extensively in previous summarization works and we discuss along the way how our framework generalizes them. • From these definitions, we formulate properties required from a useful notion of Importance as the quantity unifying these concepts. We provide intuitions to interpret the proposed quantities. • Experiments show that, even under simplifying assumptions, these quantities correlates well with human judgments making the framework promising in order to guide future empirical works. 2 Framework 2.1 Terminology and Assumptions We call semantic unit an atomic piece of information (Zhong, 2017; Cruse, 1986). We note Ωthe set of all possible semantic units. A text X is considered as a semantic source emitting semantic units as envisioned by Weaver (1953) and discussed by Bao et al. (2011). Hence, we assume that X can be represented by a probability distribution PX over the semantic units Ω. Possible interpretations: One can interpret PX as the frequency distribution of semantic units in the text. Alternatively, PX(ωi) can be seen as the (normalized) likelihood that a text X entails an atomic information ωi (Carnap and Bar-Hillel, 1953). Another interpretation is to view PX(ωi) as the normalized contribution (utility) of ωi to the overall meaning of X (Zhong, 2017). Motivation for semantic units: In general, existing semantic information theories either postulate or imply the existence of semantic units (Carnap and Bar-Hillel, 1953; Bao et al., 2011; Zhong, 2017). For example, the Theory of Strongly Semantic Information produced by Floridi (2009) implies the existence of semantic units (called information units in his work). Building on this, Tsvetkov (2014) argued that the original theory of Shannon can operate at the semantic level by relying on semantic units. In particular, existing semantic information theories imply the existence of semantic units in formal semantics (Carnap and Bar-Hillel, 1953), which treat natural languages as formal languages (Montague, 1970). In general, lexical semantics (Cruse, 1986) also postulates the existence of elementary constituents called minimal semantic constituents. For instance, with frame semantics (Fillmore, 1976), frames can act as semantic units. Recently, distributional semantics approaches have received a lot of attention (Turian et al., 2010; Mikolov et al., 2013b). They are based on the distributional hypothesis (Harris, 1954) and the assumption that meaning can be encoded in a vector space (Turney and Pantel, 2010; Erk, 2010). These approaches also search latent and independent components that underlie the behavior of words (G´abor et al., 2017; Mikolov et al., 2013a). 
While different approaches to semantics postulate different basic units and different properties for them, they have in common that meaning arises from a set of independent and discrete units. Thus, the semantic units assumption is general and has minimal commitment to the actual nature of semantics. This makes the framework compatible with most existing semantic representation approaches. Each approach specifies these units and can be plugged in the framework, e.g., frame semantics would define units as frames, topic models (Allahyari et al., 2017) would define units as topics and distributional representations would define units as dimensions of a vector space. In the following paragraphs, we represent the source document(s) D and a candidate summary S by their respective distributions PD and PS.1 2.2 Redundancy Intuitively, a summary should contain a lot of information. In information-theoretic terms, the amount of information is measured by Shannon’s 1We sometimes note X instead of PX when it is not ambiguous 1061 entropy. For a summary S represented by PS: H(S) = − X ωi PS(ωi) · log(PS(ωi)) (1) H(S) is maximized for a uniform probability distribution when every semantic unit is present only once in S: ∀(i, j), PS(ωi) = PS(ωj). Therefore, we define Redundancy, our first quantity relevant to summarization, via entropy: Red(S) = Hmax −H(S) (2) Since Hmax = log |Ω| is a constant indepedent of S, we can simply write: Red(S) = −H(S). Redundancy in Previous Works: By definition, entropy encompasses the notion of maximum coverage. Low redundancy via maximum coverage is the main idea behind the use of submodularity (Lin and Bilmes, 2011). Submodular functions are generalizations of coverage functions which can be optimized greedily with guarantees that the result would not be far from optimal (Fujishige, 2005). Thus, they have been used extensively in summarization (Sipos et al., 2012; Yogatama et al., 2015). Otherwise, low redundancy is usually enforced during the extraction/generation procedures like MMR (Carbonell and Goldstein, 1998). 2.3 Relevance Intuitively, observing a summary should reduce our uncertainty about the original text. A summary approximates the original source(s) and this approximation should incur a minimum loss of information. This property is usually called Relevance. Here, estimating Relevance boils down to comparing the distributions PS and PD, which is done via the cross-entropy Rel(S, D) = −CE(S, D): Rel(S, D) = X ωi PS(ωi) · log(PD(ωi)) (3) The cross-entropy is interpreted as the average surprise of observing S while expecting D. A summary with a low expected surprise produces a low uncertainty about what were the original sources. This is achieved by exhibiting a distribution of semantic units similar to the one of the source documents: PS ≈PD. Furthermore, we observe the following connection with Redundancy: KL(S||D) = CE(S, D) −H(S) −KL(S||D) = Rel(S, D) −Red(S) (4) KL divergence is the information loss incurred by using D as an approximation of S (i.e., the uncertainty about D arising from observing S instead of D). A summarizer that minimizes the KL divergence minimizes Redundancy while maximizing Relevance. In fact, this is an instance of the Kullback Minimum Description Principle (MDI) (Kullback and Leibler, 1951), a generalization of the Maximum Entropy Principle (Jaynes, 1957): the summary minimizing the KL divergence is the least biased (i.e., least redundant or with highest entropy) summary matching D. 
In other words, this summary fits D while inducing a minimum amount of new information. Indeed, any new information is necessarily biased since it does not arise from observations in the sources. The MDI principle and KL divergence unify Redundancy and Relevance. Relevance in Previous Works: Relevance is the most heavily studied aspect of summarization. In fact, by design, most unsupervised systems model Relevance. Usually, they used the idea of topical frequency where the most frequent topics from the sources must be extracted. Then, different notions of topics and counting heuristics have been proposed. We briefly discuss these developments here. Luhn (1958) introduced the simple but influential idea that sentences containing the most important words are most likely to embody the original document. Later, Nenkova et al. (2006) showed experimentally that humans tend to use words appearing frequently in the sources to produce their summaries. Then, Vanderwende et al. (2007) developed the system SumBasic, which scores each sentence by the average probability of its words. The same ideas can be generalized to n-grams. A prominent example is the ICSI system (Gillick and Favre, 2009) which extracts frequent bigrams. Despite being rather simple, ICSI produces strong and still close to state-of-the-art summaries (Hong et al., 2014). Different but similar words may refer to the same topic and should not be counted separately. 1062 This observation gave rise to a set of important techniques based on topic models (Allahyari et al., 2017). These approaches cover sentence clustering (McKeown et al., 1999; Radev et al., 2000; Zhang et al., 2015), lexical chains (Barzilay and Elhadad, 1999), Latent Semantic Analysis (Deerwester et al., 1990) or Latent Dirichlet Allocation (Blei et al., 2003) adapted to summarization (Hachey et al., 2006; Daum´e III and Marcu, 2006; Wang et al., 2009; Davis et al., 2012). Approaches like hLDA can exploit repetitions both at the word and at the sentence level (Celikyilmaz and Hakkani-Tur, 2010). Graph-based methods form another particularly powerful class of techniques to estimate the frequency of topics, e.g., via the notion of centrality (Mani and Bloedorn, 1997; Mihalcea and Tarau, 2004; Erkan and Radev, 2004). A significant body of research was dedicated to tweak and improve various components of graph-based approaches. For example, one can investigate different similarity measures (Chali and Joty, 2008). Also, different weighting schemes between sentences have been investigated (Leskovec et al., 2005; Wan and Yang, 2006). Therefore, in existing approaches, the topics (i.e., atomic units) were words, n-grams, sentences or combinations of these. The general idea of preferring frequent topics based on various counting heuristics is formalized by cross-entropy. Indeed, requiring the summary to minimize the crossentropy with the source documents implies that frequent topics in the sources should be extracted first. An interesting line of work is based on the assumption that the best sentences are the ones that permit the best reconstruction of the input documents (He et al., 2012). It was refined by a stream of works using distributional similarities (Li et al., 2015; Liu et al., 2015; Ma et al., 2016). There, the atomic units are the dimensions of the vector spaces. This information bottleneck idea is also neatly captured by the notion of cross-entropy which is a measure of information loss. 
Alternatively, (Daum´e and Marcu, 2002) viewed summarization as a noisy communication channel which is also rooted in information theory ideas. (Wilson and Sperber, 2008) provide a more general and less formal discussion of relevance in the context of Relevance Theory (Lavrenko, 2008). 2.4 Informativeness Relevance still ignores other potential sources of information such as previous knowledge or preconceptions. We need to further extend the contextual boundary. Intuitively, a summary is informative if it induces, for a user, a great change in her knowledge about the world. Therefore, we introduce K, the background knowledge (or preconceptions about the task). K is represented by a probability distribution PK over semantic units Ω. Formally, the amount of new information contained in a summary S is given by the crossentropy Inf(S, K) = CE(S, K): Inf(S, K) = − X ωi PS(ωi) · log(PK(ωi)) (5) For Relevance the cross-entropy between S and D should be low. However, for Informativeness, the cross-entropy between S and K should be high because we measure the amount of new information induced by the summary in our knowledge. Background knowledge is modeled by assigning a high probability to known semantic units. These probabilities correspond to the strength of ωi in the user’s memory. A simple model could be the uniform distribution over known information: PK(ωi) is 1 n if the user knows ωi, and 0 otherwise. However, K can control other variants of the summarization task: A personalized Kp models the preferences of a user by setting low probabilities to the semantic units of interest. Similarly, a query Q can be encoded by setting low probability to semantic units related to Q. Finally, there is a natural formulation of update summarization. Let U and D be two sets of documents. Update summarization consists in summarizing D given that the user has already seen U. This is modeled by setting K = U, considering U as previous knowledge. Informativeness in Previous Works: The modelization of Informativeness has received less attention by the summarization community. The problem of identifying stopwords originally faced by Luhn (1958) could be addressed by developments in the field of information retrieval using background corpora like TF·IDF (Sparck Jones, 1972). Based on the same intuition, Dunning (1993) outlined an alternative way of identifying highly descriptive words: the loglikelihood ratio test. Words identified with such 1063 techniques are known to be useful in news summarization (Harabagiu and Lacatusu, 2005). Furthermore, Conroy et al. (2006) proposed to model background knowledge by a large random set of news articles. In update summarization, Delort and Alfonseca (2012) used Bayesian topic models to ensure the extraction of informative summaries. Louis (2014) investigated background knowledge for update summarization with Bayesian surprise. This is comparable to the combination of Informativeness and Redundancy in our framework when semantic units are ngrams. Thus, previous approaches to Informativeness generally craft an alternate background distribution to model the a-priori importance of units. Then, units from the document rare in the background are preferred, which is captured by maximizing the cross-entropy between the summary and K. Indeed, unfrequent units in the background would be preferred in the summary because they would be surprising (i.e., informative) to an average user. 
2.5 Importance Since Importance is a measure that guides which choices to make when discarding semantic units, we must devise a way to encode their relative importance. Here, this means finding a probability distribution unifying D and K by encoding expectations about which semantic units should appear in a summary. Informativeness requires a biased summary (w.r.t. K) and Relevance requires an unbiased summary (w.r.t. D). Thus, a summary should, by using only information available in D, produce what brings the most new information to a user with knowledge K. This could formalize a common intuition in summarization that units frequent in the source(s) but rare in the background are important. Formally, let di = PD(ωi) be the probability of the unit ωi in the source D. Similarly, we note ki = PK(ωi). We seek a function f(di, ki) encoding the importance of unit ωi. We formulate simple requirements that f should satisfy: • Informativeness: ∀i ̸= j, if di = dj and ki > kj then f(di, ki) < f(dj, kj) • Relevance: ∀i ̸= j, if di > dj and ki = kj then f(di, ki) > f(dj, kj) • Additivity: I(f(di, ki)) ≡αI(di) + βI(ki) (I is the information measure from Shannon’s theory (Shannon, 1948)) • Normalization: P i f(di, ki) = 1 The first requirement states that, for two semantic units equally represented in the sources, we prefer the more informative one. The second requirement is an analogous statement for Relevance. The third requirement is a consistency constraint to preserve additivity of the information measures (Shannon, 1948). The fourth requirement ensures that f is a valid distribution. Theorem 1. The functions satisfying the previous requirements are of the form: P D K (ωi) = 1 C · dα i kβ i (6) C = X i dα i kβ i , α, β ∈R+ (7) C is the normalizing constant. The parameters α and β represent the strength given to Relevance and Informativeness respectively which is made clearer by equation (11). The proof is provided in appendix B. Summary scoring function: By construction, a candidate summary should approximate P D K , which encodes the relative importance of semantic units. Furthermore, the summary should be non-redundant (i.e., high entropy). These two requirements are unified by the Kullback MDI principle: The least biased summary S∗that best approximates the distribution P D K is the solution of: S∗= argmax S θI = argmin S KL(S||P D K ) (8) Thus, we note θI as the quantity that scores summaries: θI(S, D, K) = −KL(PS, ||P D K ) (9) Interpretation of P D K : P D K can be viewed as an importance-encoding distribution because it encodes the relative importance of semantic units and gives an overall target for the summary. For example, if a semantic unit ωi is prominent in D (PD(ωi) is high) and not known in K (PD(ωi) is low), then P D K (ωi) is very high, 1064 which means very desired in the summary. Indeed, choosing this unit will fill the gap in the knowledge K while matching the sources. Figure 1 illustrates how this distribution behaves with respect to D and K (for α = β = 1). Summarizability: The target distribution P D K may exhibit different properties. For example, it might be clear which semantic units should be extracted (i.e., a spiky probability distribution) or it might be unclear (i.e., many units have more or less the same importance score). This can be quantified by the entropy of the importance-encoding distribution: H D K = H(P D K ) (10) Intuitively, this measures the number of possibly good summaries. 
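A minimal sketch of the importance-encoding distribution of Theorem 1, of the summary scorer θI, and of summarizability, assuming distributions are plain dictionaries over semantic units; the small smoothing constant is an implementation choice, not part of the theory.

```python
import math

def importance_distribution(p_d, p_k, alpha=1.0, beta=1.0, eps=1e-12):
    """Theorem 1: P_{D/K}(w) proportional to P_D(w)^alpha / P_K(w)^beta, normalized by C."""
    vocab = set(p_d) | set(p_k)
    raw = {w: (p_d.get(w, 0.0) ** alpha) / ((p_k.get(w, 0.0) + eps) ** beta) for w in vocab}
    c = sum(raw.values()) or 1.0
    return {w: v / c for w, v in raw.items()}

def theta_importance(p_s, p_d, p_k, alpha=1.0, beta=1.0, eps=1e-12):
    """theta_I(S, D, K) = -KL(P_S || P_{D/K}) as in Eqs. (8)-(9); higher is better."""
    target = importance_distribution(p_d, p_k, alpha, beta, eps)
    return -sum(p * math.log(p / (target.get(w, 0.0) + eps))
                for w, p in p_s.items() if p > 0)

def summarizability(p_d, p_k, alpha=1.0, beta=1.0):
    """H_{D/K} = H(P_{D/K}) in bits (Eq. 10): low entropy means few clearly good summaries."""
    target = importance_distribution(p_d, p_k, alpha, beta)
    return -sum(p * math.log2(p) for p in target.values() if p > 0)
```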
If H D K is low then P D S is spiky and there is little uncertainty about which semantic units to extract (few possible good summaries). Conversely, if the entropy is high, many equivalently good summaries are possible. Interpretation of θI: To better understand θI, we remark that it can be expressed in terms of the previously defined quantities: θI(S, D, K) ≡−Red(S) + αRel(S, D) (11) + βInf(S, K) (12) Equality holds up to a constant term log C independent from S. Maximizing θI is equivalent to maximizing Relevance and Informativeness while minimizing Redundancy. Their relative strength are encoded by α and β. Finally, H(S), CE(S, D) and CE(S, K) are the three independent components of Importance. It is worth noting that each previously defined quantity: Red, Rel and Inf are measured in bits (using base 2 for the logarithm). Then, θI is also an information measure expressed in bits. Shannon (1948) initially axiomatized that information quantities should be additive and therefore θI arising as the sum of other information quantities is unsurprising. Moreover, we ensured additivity with the third requirement of P D K . 2.6 Potential Information Relevance relates S and D, Informativeness relates S and K, but we can also connect D and K. Intuitively, we can extract a lot of new information from D only when K and D are different. With the same argument laid out for Informativeness, we can define the amount of potential information as the average surprise of observing D while already knowing K. Again, this is given by the cross-entropy PIK(D) = CE(D, K): PIK(D) = − X ωi PD(ωi) · log(PK(ωi)) (13) Previously, we stated that a summary should aim, using only information from D, to offer the maximum amount of new information with respect to K. PIK(D) can be understood as Potential Information or maximum Informativeness, the maximum amount of new information that a summary can extract from D while knowing K. A summary S cannot extract more than PIK(D) bits of information (if using only information from D). 3 Experiments 3.1 Experimental setup To further illustrate the workings of the formula, we provide examples of experiments done with a simplistic choice for semantic units: words. Even with simple assumptions θI is a meaningful quantity which correlates well with human judgments. Data: We experiment with standard datasets for two different summarization tasks: generic and update multi-document summarization. We use two datasets from the Text Analysis Conference (TAC) shared task: TAC-2008 and TAC-2009.2 In the update part, 10 new documents (B documents) are to be summarized assuming that the first 10 documents (A documents) have already been seen. The generic task consists in summarizing the initial document set (A). For each topic, there are 4 human reference summaries and a manually created Pyramid set (Nenkova et al., 2007). In both editions, all system summaries and the 4 reference summaries were manually evaluated by NIST assessors for readability, content selection (with Pyramid) and overall responsiveness. At the time of the shared tasks, 57 systems were submitted to TAC-2008 and 55 to TAC-2009. 2http://tac.nist.gov/2009/ Summarization/, http://tac.nist.gov/2008/ 1065 (a) ditribution PD (b) distribution PK (c) distribution P D K Figure 1: figure 1a represents an example distribution of sources, figure 1b an example distribution of background knowledge and figure 1c is the resulting target distribution that summaries should approximate. 
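The decomposition of θI and the notion of Potential Information admit equally small sketches, under the same dictionary-based representation as above; the smoothing constant is again an implementation choice.

```python
import math

def theta_via_decomposition(red_s, rel_sd, inf_sk, alpha=1.0, beta=1.0):
    """Eq. (11): theta_I(S, D, K) = -Red(S) + alpha*Rel(S, D) + beta*Inf(S, K), up to log C."""
    return -red_s + alpha * rel_sd + beta * inf_sk

def potential_information(p_d, p_k, eps=1e-12):
    """Eq. (13): PI_K(D) = CE(P_D, P_K), the most new information a summary of D
    can offer a user who already knows K."""
    return -sum(p * math.log(p_k.get(w, 0.0) + eps) for w, p in p_d.items() if p > 0)
```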
Setup and Assumptions: To keep the experiments simple and focused on illustrating the formulas, we make several simplistic assumptions. First, we choose words as semantic units and therefore texts are represented as frequency distributions over words. This assumption was already employed by previous works using information-theoretic tools for summarization (Haghighi and Vanderwende, 2009). While it is limiting, this remains a simple approximation letting us observe the quantities in action. K, α and β are the parameters of the theory and their choice is subject to empirical investigation. Here, we make simple choices: for update summarization, K is the frequency distribution over words in the background documents (A). For generic summarization, K is the uniform probability distribution over all words from the source documents. Furthermore, we use α = β = 1. 3.2 Correlation with humans First, we measure how well the different quantities correlate with human judgments. We compute the score of each system summary according to each quantity defined in the previous section: Red, Rel, Inf, θI(S, D, K). We then compute the correlations between these scores and the manual Pyramid scores. Indeed, each quantity is a summary scoring function and could, therefore, be evaluated based on its ability to correlate with human judgments (Lin and Hovy, 2003). Thus, we also report the performances of the summary scoring functions from several standard baselines: Edmundson (Edmundson, 1969) which scores sentences based on 4 methods: term frequency, presence of cue-words, overlap with title and position of the sentence. LexRank (Erkan and Radev, 2004) is a popular graph-based approach which scores sentences based on their centrality in a sentence similarity graph. ICSI (Gillick and Favre, 2009) extracts a summary by solving a maximum coverage problem considering the most frequent bigrams in the source documents. KL and JS (Haghighi and Vanderwende, 2009) which measure the divergence between the distribution of words in the summary and in the sources. Furthermore, we report two baselines from Louis (2014) which account for background knowledge: KLback and JSback which measure the divergence between the distribution of the summary and the background knowledge K. Further details concerning baseline scoring functions can be found in appendix A. We measure the correlations with Kendall’s τ, a rank correlation metric which compares the orders induced by both scored lists. We report results for both generic and update summarization averaged over all topics for both datasets in table 1. In general, the modelizations of Relevance (based only on the sources) correlate better with human judgments than other quantities. Metrics accounting for background knowledge work better in the update scenario. This is not surprising as the background knowledge K is more meaningful in this case (using the previous document set). We observe that JS divergence gives slightly better results than KL. Even though KL is more theoretically appealing, JS is smoother and usually works better in practice when distributions have different supports (Louis and Nenkova, 2013). Finally, θI significantly3 outperforms all baselines in both the generic and the update case. Red, Rel and Inf are not particularly strong on their own, but combined together they yield a strong summary scoring function θI. Indeed, each quantity models only one aspect of content selection, only together they form a strong signal for Importance. 
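A minimal sketch of this correlation analysis, assuming each topic comes with (summary, Pyramid score) pairs for the submitted systems and that SciPy is available for Kendall's τ; the data layout is an assumption made for illustration only.

```python
from scipy.stats import kendalltau

def average_kendall_tau(topics, scoring_fn):
    """Average per-topic Kendall's tau between a summary scoring function and Pyramid scores.

    `topics` is assumed to be a list of topics, each a list of
    (summary_representation, pyramid_score) pairs.
    """
    taus = []
    for systems in topics:
        predicted = [scoring_fn(summary) for summary, _ in systems]
        pyramid = [score for _, score in systems]
        tau, _p_value = kendalltau(predicted, pyramid)
        taus.append(tau)
    return sum(taus) / len(taus)
```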
3at 0.01 with significance testing done with a t-test to compare two means 1066 We need to be careful when interpreting these results because we made several strong assumptions: by choosing n-grams as semantic units and by choosing K rather arbitrarily. Nevertheless, these are promising results. By investigating better text representations and more realistic K, we should expect even higher correlations. We provide a qualitative example on one topic in appendix C with a visualization of P D K in comparison to reference summaries. Generic Update ICSI .178 .139 Edm. .215 .205 LexRank .201 .164 KL .204 .176 JS .225 .189 KLback .110 .167 JSback .066 .187 Red .098 .096 Rel .212 .192 Inf .091 .086 θI .294 .211 Table 1: Correlation of various information-theoretic quantities with human judgments measured by Kendall’s τ on generic and update summarization. 3.3 Comparison with Reference Summaries Intuitively, the distribution P D K should be similar to the probability distribution PR of the humanwritten reference summaries. To verify this, we scored the system summaries and the reference summaries with θI and checked whether there is a significant difference between the two lists.4 We found that θI scores reference summaries significantly higher than system summaries. The p−value, for the generic case, is 9.2e−6 and 1.1e−3 for the update case. Both are much smaller than the 1e−2 significance level. Therefore, θI is capable of distinguishing systems summaries from human written ones. For comparison, the best baseline (JS) has the following p−values: 8.2e−3 (Generic) and 4.5e−2 (Update). It does not pass the 1e−2 significance level for the update scenario. 4with standard t-test for comparing two related means. 4 Conclusion and Future Work In this work, we argued for the development of theoretical models of Importance and proposed one such framework. Thus, we investigated a theoretical formulation of the notion of Importance. In a framework rooted in information theory, we formalized several summary-related quantities like: Redundancy, Relevance and Informativeness. Importance arises as the notion unifying these concepts. More generally, Importance is the measure that guides which choices to make when information must be discarded. The introduced quantities generalize the intuitions that have previously been used in summarization research. Conceptually, it is straightforward to build a system out of θI once a semantic units representation and a K have been chosen. A summarizer intends to extract or generate a summary maximizing θI. This fits within the general optimization framework for summarization (McDonald, 2007; Peyrard and Eckle-Kohler, 2017b; Peyrard and Gurevych, 2018) The background knowledge and the choice of semantic units are free parameters of the theory. They are design choices which can be explored empirically by subsequent works. Our experiments already hint that strong summarizers can be developed from this framework. Characters, character n-grams, morphemes, words, n-grams, phrases, and sentences do not actually qualify as semantic units. Even though previous works who relied on information theoretic motivation (Lin et al., 2006; Haghighi and Vanderwende, 2009; Louis and Nenkova, 2013; Peyrard and EckleKohler, 2016) used some of them as support for probability distributions, they are neither atomic nor independent. It is mainly because they are surface forms whereas semantic units are abstract and operate at the semantic level. However, they might serve as convenient approximations. 
Then, interesting research questions arise like Which granularity offers a good approximation of semantic units? Can we automatically learn good approximations? N-grams are known to be useful, but other granularities have rarely been considered together with information-theoretic tools. For the background knowledge K, a promising direction would be to use the framework to actually learn it from data. In particular, one can apply supervised techniques to automatically search for K, α and β: finding the values of these parame1067 ters such that θI has the best correlation with human judgments. By aggregating over many users and many topics one can find a generic K: what, on average, people consider as known when summarizing a document. By aggregating over different people but in one domain, one can uncover a domain-specific K. Similarly, by aggregating over many topics for one person, one would find a personalized K. These consistute promising research directions for future works. Acknowledgements This work was partly supported by the German Research Foundation (DFG) as part of the Research Training Group “Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES) under grant No. GRK 1994/1, and via the German-Israeli Project Cooperation (DIP, grant No. GU 798/17-1). We also thank the anonymous reviewers for their comments. References Mehdi Allahyari, Seyedamin Pouriyeh, Mehdi Assefi, Saeid Safaei, Elizabeth D. Trippe, Juan B. Gutierrez, and Krys Kochut. 2017. Text Summarization Techniques: A Brief Survey. International Journal of Advanced Computer Science and Applications, 8(10). Jie Bao, Prithwish Basu, Mike Dean, Craig Partridge, Ananthram Swami, Will Leland, and James A Hendler. 2011. Towards a theory of semantic communication. In Network Science Workshop (NSW), 2011 IEEE, pages 110–117. IEEE. Regina Barzilay and Michael Elhadad. 1999. Using Lexical Chains for Text Summarization. Advances in Automatic Text Summarization, pages 111–121. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022. Jaime Carbonell and Jade Goldstein. 1998. The Use of MMR, Diversity-based Reranking for Reordering Documents and Producing Summaries. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’98, pages 335–336. Rudolf Carnap and Yehoshua Bar-Hillel. 1953. An Outline of a Theory of Semantic Information. British Journal for the Philosophy of Science., 4. Asli Celikyilmaz and Dilek Hakkani-Tur. 2010. A Hybrid Hierarchical Model for Multi-Document Summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 815–824, Uppsala, Sweden. Association for Computational Linguistics. Yllias Chali and Shafiq R. Joty. 2008. Improving the performance of the random walk model for answering complex questions. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, pages 9–12. Association for Computational Linguistics. John M. Conroy, Judith D. Schlesinger, and Dianne P. O’Leary. 2006. Topic-Focused MultiDocument Summarization Using an Approximate Oracle Score. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 152– 159, Sydney, Australia. Association for Computational Linguistics. D.A. Cruse. 1986. Lexical Semantics. Cambridge University Press, Cambridge, UK. Dipanjan Das and Andr´e F. T. Martins. 
2010. A Survey on Automatic Text Summarization. Literature Survey for the Language and Statistics II Course at CMU. Hal Daum´e, III and Daniel Marcu. 2002. A Noisychannel Model for Document Compression. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 449–456. Hal Daum´e III and Daniel Marcu. 2006. Bayesian Query-Focused Summarization. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 305– 312. Association for Computational Linguistics. Sashka T. Davis, John M. Conroy, and Judith D. Schlesinger. 2012. OCCAMS–An Optimal Combinatorial Covering Algorithm for Multi-document Summarization. In Proceeding of the 12th International Conference on Data Mining Workshops (ICDMW), pages 454–463. IEEE. Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science, 41(6):391–407. Jean-Yves Delort and Enrique Alfonseca. 2012. DualSum: A Topic-model Based Approach for Update Summarization. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 214–223. Ted Dunning. 1993. Accurate Methods for the Statistics of Surprise and Coincidence. Computational linguistics, 19(1):61–74. H. P. Edmundson. 1969. New Methods in Automatic Extracting. Journal of the Association for Computing Machinery, 16(2):264–285. 1068 Katrin Erk. 2010. What is Word Meaning, Really? (and How Can Distributional Models Help Us Describe It?). In Proceedings of the 2010 workshop on geometrical models of natural language semantics, pages 17–26. Association for Computational Linguistics. G¨unes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based Lexical Centrality As Salience in Text Summarization. Journal of Artificial Intelligence Research, pages 457–479. Charles J. Fillmore. 1976. Frame Semantics And the Nature of Language. Annals of the New York Academy of Sciences, 280(1):20–32. Luciano Floridi. 2009. Philosophical Conceptions of Information. In Formal Theories of Information, pages 13–53. Springer. Satoru Fujishige. 2005. Submodular functions and optimization. Annals of discrete mathematics. Elsevier, Amsterdam, Boston, Paris. Kata G´abor, Haifa Zargayouna, Isabelle Tellier, Davide Buscaldi, and Thierry Charnois. 2017. Exploring Vector Spaces for Semantic Relations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1814–1823, Copenhagen, Denmark. Association for Computational Linguistics. Dan Gillick and Benoit Favre. 2009. A Scalable Global Model for Summarization. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, pages 10–18, Boulder, Colorado. Association for Computational Linguistics. Ben Hachey, Gabriel Murray, and David Reitter. 2006. Dimensionality Reduction Aids Term CoOccurrence Based Multi-Document Summarization. In Proceedings of the Workshop on Task-Focused Summarization and Question Answering, pages 1– 7. Association for Computational Linguistics. Aria Haghighi and Lucy Vanderwende. 2009. Exploring Content Models for Multi-document Summarization. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 362–370. Sanda Harabagiu and Finley Lacatusu. 2005. 
Topic Themes for Multi-document Summarization. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 202–209. Zellig Harris. 1954. Distributional structure. Word, 10:146–162. Zhanying He, Chun Chen, Jiajun Bu, Can Wang, Lijun Zhang, Deng Cai, and Xiaofei He. 2012. Document Summarization Based on Data Reconstruction. In Proceeding of the Twenty-Sixth Conference on Artificial Intelligence. Kai Hong, John Conroy, benoit Favre, Alex Kulesza, Hui Lin, and Ani Nenkova. 2014. A Repository of State of the Art and Competitive Baseline Summaries for Generic News Summarization. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 1608–1616, Reykjavik, Iceland. Edwin T. Jaynes. 1957. Information Theory and Statistical Mechanics. Physical Review, 106:620–630. Chris Kedzie, Kathleen McKeown, and Hal Daume III. 2018. Content Selection in Deep Learning Models of Summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1818–1828. Association for Computational Linguistics. Solomon Kullback and Richard A. Leibler. 1951. On Information and Sufficiency. The Annals of Mathematical Statistics, 22(1):79–86. Victor Lavrenko. 2008. A generative theory of relevance, volume 26. Springer Science & Business Media. Jure Leskovec, Natasa Milic-Frayling, and Marko Grobelnik. 2005. Impact of Linguistic Analysis on the Semantic Graph Coverage and Learning of Document Extracts. In Proceedings of the National Conference on Artificial Intelligence, pages 1069–1074. Piji Li, Lidong Bing, Wai Lam, Hang Li, and Yi Liao. 2015. Reader-Aware Multi-document Summarization via Sparse Coding. In Proceedings of the 24th International Conference on Artificial Intelligence , pages 1270–1276. Chin-Yew Lin, Guihong Cao, Jianfeng Gao, and JianYun Nie. 2006. An Information-Theoretic Approach to Automatic Evaluation of Summaries. In Proceedings of the Human Language Technology Conference at NAACL, pages 463–470, New York City, USA. Chin-Yew Lin and Eduard Hovy. 2003. Automatic Evaluation of Summaries Using N-gram Cooccurrence Statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, volume 1, pages 71–78. Hui Lin and Jeff A. Bilmes. 2011. A Class of Submodular Functions for Document Summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL), pages 510–520, Portland, Oregon. He Liu, Hongliang Yu, and Zhi-Hong Deng. 2015. Multi-document Summarization Based on Two-level Sparse Representation Model. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pages 196–202. 1069 Annie Louis. 2014. A Bayesian Method to Incorporate Background Knowledge during Automatic Text Summarization. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 333–338, Baltimore, Maryland. Annie Louis and Ani Nenkova. 2013. Automatically Assessing Machine Summary Content Without a Gold Standard. Computational Linguistics, 39(2):267–300. Hans Peter Luhn. 1958. The Automatic Creation of Literature Abstracts. IBM Journal of Research Development, 2:159–165. Shulei Ma, Zhi-Hong Deng, and Yunlun Yang. 2016. An Unsupervised Multi-Document Summarization Framework Based on Neural Document Model. 
In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1514–1523. The COLING 2016 Organizing Committee. Inderjeet Mani. 1999. Advances in Automatic Text Summarization. MIT Press, Cambridge, MA, USA. Inderjeet Mani and Eric Bloedorn. 1997. Multidocument Summarization by Graph Search and Matching. In Proceedings of the Fourteenth National Conference on Artificial Intelligence and Ninth Conference on Innovative Applications of Artificial Intelligence, pages 622–628, Providence, Rhode Island. AAAI Press. Ryan McDonald. 2007. A Study of Global Inference Algorithms in Multi-document Summarization. In Proceedings of the 29th European Conference on Information Retrieval Research, pages 557–564. Kathleen R. McKeown, Judith L. Klavans, Vasileios Hatzivassiloglou, Regina Barzilay, and Eleazar Eskin. 1999. Towards Multidocument Summarization by Reformulation: Progress and Prospects. In Proceedings of the Sixteenth National Conference on Artificial Intelligence and the Eleventh Innovative Applications of Artificial Intelligence Conference Innovative Applications of Artificial Intelligence, pages 453–460. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient Estimation of Word Representations in Vector Space. CoRR, abs/1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, Lake Tahoe, Nevada, USA. Richard Montague. 1970. English as a formal language. In Bruno Visentini, editor, Linguaggi nella societa e nella tecnica, pages 188–221. Edizioni di Communita. Ani Nenkova and Kathleen McKeown. 2012. A Survey of Text Summarization Techniques. Mining Text Data, pages 43–76. Ani Nenkova, Rebecca Passonneau, and Kathleen McKeown. 2007. The Pyramid Method: Incorporating Human Content Selection Variation in Summarization Evaluation. ACM Transactions on Speech and Language Processing (TSLP), 4(2). Ani Nenkova, Lucy Vanderwende, and Kathleen McKeown. 2006. A Compositional Context Sensitive Multi-document Summarizer: Exploring the Factors That Influence Summarization. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’06, pages 573–580. Maxime Peyrard and Judith Eckle-Kohler. 2016. A General Optimization Framework for MultiDocument Summarization Using Genetic Algorithms and Swarm Intelligence. In Proceedings of the 26th International Conference on Computational Linguistics (COLING), pages 247 – 257. Maxime Peyrard and Judith Eckle-Kohler. 2017a. A principled framework for evaluating summarizers: Comparing models of summary quality against human judgments. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), volume Volume 2: Short Papers, pages 26–31. Association for Computational Linguistics. Maxime Peyrard and Judith Eckle-Kohler. 2017b. Supervised learning of automatic pyramid for optimization-based multi-document summarization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), volume Volume 1: Long Papers, pages 1084– 1094. Association for Computational Linguistics. Maxime Peyrard and Iryna Gurevych. 2018. 
Objective function learning to match human judgements for optimization-based summarization. In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 654–660. Association for Computational Linguistics. Dragomir R. Radev, Hongyan Jing, and Malgorzata Budzikowska. 2000. Centroid-based Summarization of Multiple Documents: Sentence Extraction, Utility-based Evaluation, and User Studies. In Proceedings of the NAACL-ANLP Workshop on Automatic Summarization, volume 4, pages 21–30, Seattle, Washington. 1070 Claude E. Shannon. 1948. A Mathematical Theory of Communication. Bell Systems Technical Journal, 27:623–656. Ruben Sipos, Pannaga Shivaswamy, and Thorsten Joachims. 2012. Large-margin Learning of Submodular Summarization Models. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 224–233, Avignon, France. Association for Computational Linguistics. Karen Sparck Jones. 1972. A Statistical Interpretation of Term Specificity and its Application in Retrieval. Journal of documentation, 28(1):11–21. Victor Yakovlevich Tsvetkov. 2014. The KE Shannon and L. Floridi’s Amount of Information. Life Science Journal, 11(11):667–671. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word Representations: A Simple and General Method for Semi-supervised Learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384–394. Peter D Turney and Patrick Pantel. 2010. From Frequency to Meaning: Vector Space Models of Semantics. Journal of artificial intelligence research, 37:141–188. Lucy Vanderwende, Hisami Suzuki, Chris Brockett, and Ani Nenkova. 2007. Beyond SumBasic: Taskfocused Summarization with Sentence Simplification and Lexical Expansion. Information Processing & Management, 43(6):1606–1618. Xiaojun Wan and Jianwu Yang. 2006. Improved Affinity Graph Based Multi-Document Summarization. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 181–184. Association for Computational Linguistics. Dingding Wang, Shenghuo Zhu, Tao Li, and Yihong Gong. 2009. Multi-document Summarization Using Sentence-based Topic Models. In Proceedings of the ACL-IJCNLP 2009, pages 297–300. Association for Computational Linguistics. Warren Weaver. 1953. Recent Contributions to the Mathematical Theory of Communication. ETC: A Review of General Semantics, pages 261–281. Deirdre Wilson and Dan Sperber. 2008. Relevance Theory, chapter 27. John Wiley and Sons, Ltd. Jin-ge Yao, Xiaojun Wan, and Jianguo Xiao. 2017. Recent Advances in Document Summarization. Knowledge and Information Systems, 53(2):297– 336. Dani Yogatama, Fei Liu, and Noah A. Smith. 2015. Extractive Summarization by Maximizing Semantic Volume. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1961–1966, Lisbon, Portugal. Yang Zhang, Yunqing Xia, Yi Liu, and Wenmin Wang. 2015. Clustering Sentences with Density Peaks for Multi-document Summarization. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1262– 1267, Denver, Colorado. Association for Computational Linguistics. Yixin Zhong. 2017. A Theory of Semantic Information. In Proceedings of the IS4SI 2017 Summit Digitalisation for a Sustainable Society, 129. 
Markus Zopf, Eneldo Loza Menc´ıa, and Johannes F¨urnkranz. 2016. Beyond Centrality and Structural Features: Learning Information Importance for Text Summarization. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL 2016), pages 84–94. 1071 A Details about Baseline Scoring Functions In the paper, we compare the summary scoring function θI against the summary scoring functions derived from several summarizers following the methodology from Peyrard and Eckle-Kohler (2017a). Here, we give explicit formulation of the baseline scoring functions. Edmundson: (Edmundson, 1969) Edmundson (1969) presented a heuristic which scores sentences according to 4 different features: • Cue-phrases: It is based on the hypothesis that the probable relevance of a sentence is affected by the presence of certain cue words such as ’significant’ or ’important’. Bonus words have positive weights, stigma words have negative weights and all the others have no weight. The final score of the sentence is the sum of the weights of its words. • Key: High-frequency content words are believed to be positively correlated with relevance (Luhn, 1958). Each word receives a weight based on its frequency in the document if it is not a stopword. The score of the sentence is also the sum of the weights of its words. • Title: It measures the overlap between the sentence and the title. • Location: It relies on the assumption that sentences appearing early or late in the source documents are more relevant. By combining these scores with a linear combination, we can recognize the objective function: θEdm.(S) = X s∈S α1 · C(s) + α2 · K(s) (14) + α3 · T(s) + α4 · L(s) (15) The sum runs over sentences and C, K, T and L output the sentence scores for each method (Cue, Key, Title and Location). ICSI: (Gillick and Favre, 2009) A global linear optimization that extracts a summary by solving a maximum coverage problem of the most frequent bigrams in the source documents. ICSI has been among the best systems in a classical ROUGE evaluation (Hong et al., 2014). Here, the identification of the scoring function is trivial because it was originally formulated as an optimization task. If ci is the i-th bigram selected in the summary and wi is its weight computed from D, then: θICSI(S) = X ci∈S ci · wi (16) LexRank: (Erkan and Radev, 2004) This is a well-known graph-based approach. A similarity graph G(V, E) is constructed where V is the set of sentences and an edge eij is drawn between sentences vi and vj if and only if the cosine similarity between them is above a given threshold. Sentences are scored according to their PageRank score in G. Thus, θLexRank is given by: θLexRank(S) = X s∈S PRG(s) (17) Here, PR is the PageRank score of sentence s. KL-Greedy: (Haghighi and Vanderwende, 2009) In this approach, the summary should minimize the Kullback-Leibler (KL) divergence between the word distribution of the summary S and the word distribution of the documents D (i.e., θKL = −KL): θKL(S) = −KL(S||D) (18) = − X g∈S PS(g) log PS(g) PD(g) (19) PX(w) represents the frequency of the word (or n-gram) w in the text X. The minus sign indicates that KL should be lower for better summaries. Indeed, we expect a good system summary to exhibit a similar probability distribution of n-grams as the sources. Alternatively, the Jensen-Shannon (JS) divergence can be used instead of KL. 
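A compact sketch of the θKL scorer just defined, together with the JS variant spelled out next, assuming the inputs are word- or bigram-frequency dictionaries; the smoothing constant is an implementation choice.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) over a shared vocabulary of units, with light smoothing."""
    return sum(pi * math.log((pi + eps) / (q.get(w, 0.0) + eps)) for w, pi in p.items() if pi > 0)

def theta_kl(p_summary, p_docs):
    """theta_KL(S) = -KL(P_S || P_D) as in Eqs. (18)-(19); better summaries score higher."""
    return -kl_divergence(p_summary, p_docs)

def theta_js(p_summary, p_docs):
    """theta_JS(S) = -JS(P_S || P_D), via the average distribution M (Eqs. 20-22)."""
    vocab = set(p_summary) | set(p_docs)
    m = {w: 0.5 * (p_summary.get(w, 0.0) + p_docs.get(w, 0.0)) for w in vocab}
    return -0.5 * (kl_divergence(p_summary, m) + kl_divergence(p_docs, m))
```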
Let M be the average word frequency distribution of the candidate summary S and the source documents D distribution: ∀g ∈S, PM(g) = 1 2(PS(g) + PD(g)) (20) Then, the formula for JS is given by: θJS(S) = −JS(S||D) (21) = 1 2 (KL(S||M) + KL(D||M)) (22) 1072 Within our framework, the KL divergence acts as the unification of Relevance and Redundancy when semantic units are bigrams. B Proof of Theorem 1 Let Ωbe the set of semantic units. The notation ωi represents one unit. Let PT , and PK be the text representations of the source documents and background knowledge as probability distributions over semantic units. We note ti = PT (ωi), the probability of the unit ωi in the source T. Similarly, we note ki = PK(ωi). We seek a function f unifying T and K such that: f(ωi) = f(ti, ki). We remind the simple requirements that f should satisfy: • Informativeness: ∀i ̸= j, if ti = tj and ki > kj then f(ti, ki) < f(tj, kj) • Relevance: ∀i ̸= j, if ti > tj and ki = kj then f(ti, ki) > f(tj, kj) • Additivity: I(f(ti, ki)) ≡αI(ti)+βI(ki) (I is the information measure from Shannon’s theory (Shannon, 1948)) • Normalization: P i f(ti, ki) = 1 Theorem 1 states that the functions satisfying the previous requirements are: P T K (ωi) = 1 C · tα i kβ i C = X i tα i kβ i , α, β ∈R+ (23) with C the normalizing constant. Proof. The information function defined by Shannon (1948) is the logarithm: I = log. Then, the Additivity criterion can be written: log(f(ti, ki)) = α log(ti) + β log(ki) + A (24) with A a constant independent of ti and ki Since log is monotonous and increasing, the Informativeness and Additivity criteria can be combined: ∀i ̸= j, if ti = tj and ki > kj then: log f(ti, ki) < log f(tj, kj) α log(ti) + β log(ki) < α log(tj) + β log(kj) β log(ki) < β log(kj) But ki > kj, therefore: β < 0 For clarity, we can now use −β with β ∈R+. Similarly, we can combine the Relevance and Additivity criteria: ∀i ̸= j, if ti > tj and ki = kj then: log f(ti, ki) > log f(tj, kj) α log(ti) + β log(ki) > α log(tj) + β log(kj) α log(ti) > α log(tj) But ti > tj, therefore: α > 0 Then, we have the following form from the Additivity criterion: log f(ti, ki) = α log(ti) −β log(ki) + A f(ti, ki) = eAe[α log(ti)−β log(ki)] f(ti, ki) = eA tα i kβ i x Finally, the Normalization constraint specifies the constant eA: C = 1 eA and C = X i tα i kβ i then: A = −log( X i tα i kβ i ) C Example As an example, for one selected topic of TAC2008 update track, we computed the P D K and compare it to the distribution of the 4 reference summaries. We report the two distributions together in figure 2. For visibility, only the top 50 words according to P D K are considered. However, we observe 1073 Figure 2: Example of P D K in comparison to the word distribution of reference summaries for one topic of TAC-2008 (D0803). a good match between the distribution of the reference summaries and the ideal distribution as defined by P D K . Furthermore, the most desired words according to P D K make sense. This can be seen by looking at one of the human-written reference summary of this topic: Reference summary for topic D0803 China sacrificed coal mine safety in its massive demand for energy. Gas explosions, flooding, fires, and cave-ins cause most accidents. The mining industry is riddled with corruption from mining officials to owners. Officials are often illegally invested in mines and ignore safety procedures for production. South Africa recently provided China with information on mining safety and technology during a conference. 
China is beginning enforcement of safety regulations. Over 12,000 mines have been ordered to suspend operations and 4,000 others ordered closed. This year 4,228 miners were killed in 2,337 coal mine accidents. China’s mines are the most dangerous worldwide.
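A small sketch of the qualitative check behind this appendix, reusing the importance_distribution helper sketched earlier and assuming a word distribution p_ref estimated from the reference summaries; printing the table is a stand-in for the plot in Figure 2.

```python
def compare_target_to_references(p_d, p_k, p_ref, top_n=50):
    """List the top-n units of P_{D/K} next to their probability in the reference summaries."""
    target = importance_distribution(p_d, p_k)  # sketched in Section 2.5 above
    top = sorted(target, key=target.get, reverse=True)[:top_n]
    for w in top:
        print(f"{w:<20} P_D/K={target[w]:.4f}  P_ref={p_ref.get(w, 0.0):.4f}")
```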
2019
101
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074–1084 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1074 Multi-News: a Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model Alexander R. Fabbri Irene Li Tianwei She Suyi Li Dragomir R. Radev Department of Computer Science Yale University {alexander.fabbri,irene.li,tianwei.she,suyi.li,dragomir.radev}@yale.edu Abstract Automatic generation of summaries from multiple news articles is a valuable tool as the number of online publications grows rapidly. Single document summarization (SDS) systems have benefited from advances in neural encoder-decoder model thanks to the availability of large datasets. However, multidocument summarization (MDS) of news articles has been limited to datasets of a couple of hundred examples. In this paper, we introduce Multi-News, the first large-scale MDS news dataset. Additionally, we propose an end-to-end model which incorporates a traditional extractive summarization model with a standard SDS model and achieves competitive results on MDS datasets. We benchmark several methods on Multi-News and release our data and code in hope that this work will promote advances in summarization in the multidocument setting1. 1 Introduction Summarization is a central problem in Natural Language Processing with increasing applications as the desire to receive content in a concise and easily-understood format increases. Recent advances in neural methods for text summarization have largely been applied in the setting of single-document news summarization and headline generation (Rush et al., 2015; See et al., 2017; Gehrmann et al., 2018). These works take advantage of large datasets such as the Gigaword Corpus (Napoles et al., 2012), the CNN/Daily Mail (CNNDM) dataset (Hermann et al., 2015), the New York Times dataset (NYT, 2008) and the Newsroom corpus (Grusky et al., 2018), which contain on the order of hundreds of thousands to millions of article-summary pairs. However, multidocument summarization (MDS), which aims to 1https://github.com/Alex-Fabbri/ Multi-News Source 1 Meng Wanzhou, Huawei’s chief financial officer and deputy chair, was arrested in Vancouver on 1 December. Details of the arrest have not been released... Source 2 A Chinese foreign ministry spokesman said on Thursday that Beijing had separately called on the US and Canada to “clarify the reasons for the detention ”immediately and “immediately release the detained person ”. The spokesman... Source 3 Canadian officials have arrested Meng Wanzhou, the chief financial officer and deputy chair of the board for the Chinese tech giant Huawei,...Meng was arrested in Vancouver on Saturday and is being sought for extradition by the United States. A bail hearing has been set for Friday... Summary ...Canadian authorities say she was being sought for extradition to the US, where the company is being investigated for possible violation of sanctions against Iran. Canada’s justice department said Meng was arrested in Vancouver on Dec. 1... China’s embassy in Ottawa released a statement.. “The Chinese side has lodged stern representations with the US and Canadian side, and urged them to immediately correct the wrongdoing ”and restore Meng’s freedom, the statement said... Table 1: An example from our multi-document summarization dataset showing the input documents and their summary. Content found in the summary is colorcoded. 
output summaries from document clusters on the same topic, has largely been performed on datasets with less than 100 document clusters such as the DUC 2004 (Paul and James, 2004) and TAC 2011 (Owczarzak and Dang, 2011) datasets, and has benefited less from advances in deep learning methods. Multi-document summarization of news events offers the challenge of outputting a well-organized summary which covers an event comprehensively while simultaneously avoiding redundancy. The input documents may differ in focus and point of view for an event. We present an example of multiple input news documents and their summary in 1075 Figure 1. The three source documents discuss the same event and contain overlaps in content: the fact that Meng Wanzhou was arrested is stated explicitly in Source 1 and 3 and indirectly in Source 2. However, some sources contain information not mentioned in the others which should be included in the summary: Source 3 states that (Wanzhou) is being sought for extradition by the US while only Source 2 mentioned the attitude of the Chinese side. Recent work in tackling this problem with neural models has attempted to exploit the graph structure among discourse relations in text clusters (Yasunaga et al., 2017) or through an auxiliary text classification task (Cao et al., 2017). Additionally, a couple of recent papers have attempted to adapt neural encoder decoder models trained on single document summarization datasets to MDS (Lebanoff et al., 2018; Baumel et al., 2018; Zhang et al., 2018b). However, data sparsity has largely been the bottleneck of the development of neural MDS systems. The creation of large-scale multi-document summarization dataset for training has been restricted due to the sparsity and cost of humanwritten summaries. Liu et al. (2018) trains abstractive sequence-to-sequence models on a large corpus of Wikipedia text with citations and search engine results as input documents. However, no analogous dataset exists in the news domain. To bridge the gap, we introduce Multi-News, the first large-scale MDS news dataset, which contains 56,216 articles-summary pairs. We also propose a hierarchical model for neural abstractive multi-document summarization, which consists of a pointer-generator network (See et al., 2017) and an additional Maximal Marginal Relevance (MMR) (Carbonell and Goldstein, 1998) module that calculates sentence ranking scores based on relevancy and redundancy. We integrate sentence-level MMR scores into the pointergenerator model to adapt the attention weights on a word-level. Our model performs competitively on both our Multi-News dataset and the DUC 2004 dataset on ROUGE scores. We additionally perform human evaluation on several system outputs. Our contributions are as follows: We introduce the first large-scale multi-document summarization datasets in the news domain. We propose an end-to-end method to incorporate MMR into pointer-generator networks. Finally, we benchmark various methods on our dataset to lay the foundations for future work on large-scale MDS. 2 Related Work Traditional non-neural approaches to multidocument summarization have been both extractive (Carbonell and Goldstein, 1998; Radev et al., 2000; Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Haghighi and Vanderwende, 2009) as well as abstractive (McKeown and Radev, 1995; Radev and McKeown, 1998; Barzilay et al., 1999; Ganesan et al., 2010). 
Recently, neural methods have shown great promise in text summarization, although largely in the single-document setting, with both extractive (Nallapati et al., 2016a; Cheng and Lapata, 2016; Narayan et al., 2018b) and abstractive methods (Chopra et al., 2016; Nallapati et al., 2016b; See et al., 2017; Paulus et al., 2017; Cohan et al., 2018; C¸ elikyilmaz et al., 2018; Gehrmann et al., 2018) In addition to the multi-document methods described above which address data sparsity, recent work has attempted unsupervised and weakly supervised methods in non-news domains (Chu and Liu, 2018; Angelidis and Lapata, 2018). The methods most related to this work are SDS adapted for MDS data. Zhang et al. (2018a) adopts a hierarchical encoding framework trained on SDS data to MDS data by adding an additional document-level encoding. Baumel et al. (2018) incorporates query relevance into standard sequence-to-sequence models. Lebanoff et al. (2018) adapts encoder-decoder models trained on single-document datasets to the MDS case by introducing an external MMR module which does not require training on the MDS dataset. In our work, we incorporate the MMR module directly into our model, learning weights for the similarity functions simultaneously with the rest of the model. 3 Multi-News Dataset Our dataset, which we call Multi-News, consists of news articles and human-written summaries of these articles from the site newser.com. Each summary is professionally written by editors and includes links to the original articles cited. We will release stable Wayback-archived links, and scripts to reproduce the dataset from these links. Our dataset is notably the first large-scale dataset for MDS on news articles. Our dataset also comes 1076 # of source Frequency # of source Frequency 2 23,894 7 382 3 12,707 8 209 4 5,022 9 89 5 1,873 10 33 6 763 Table 2: The number of source articles per example, by frequency, in our dataset. from a diverse set of news sources; over 1,500 sites appear as source documents 5 times or greater, as opposed to previous news datasets (DUC comes from 2 sources, CNNDM comes from CNN and Daily Mail respectively, and even the Newsroom dataset (Grusky et al., 2018) covers only 38 news sources). A total of 20 editors contribute to 85% of the total summaries on newser.com. Thus we believe that this dataset allows for the summarization of diverse source documents and summaries. 3.1 Statistics and Analysis The number of collected Wayback links for summaries and their corresponding cited articles totals over 250,000. We only include examples with between 2 and 10 source documents per summary, as our goal is MDS, and the number of examples with more than 10 sources was minimal. The number of source articles per summary present, after downloading and processing the text to obtain the original article text, varies across the dataset, as shown in Table 2. We believe this setting reflects real-world situations; often for a new or specialized event there may be only a few news articles. Nonetheless, we would like to summarize these events in addition to others with greater news coverage. We split our dataset into training (80%, 44,972), validation (10%, 5,622), and test (10%, 5,622) sets. Table 3 compares Multi-News to other news datasets used in experiments below. We choose to compare Multi-News with DUC data from 2003 and 2004 and TAC 2011 data, which are typically used in multi-document settings. 
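A minimal sketch of the filtering and splitting steps just described, assuming each example is a (source_articles, summary) pair; the shuffling and the seed are illustrative assumptions, since the paper does not specify how the split was drawn.

```python
import random

def filter_and_split(examples, min_sources=2, max_sources=10, seed=0):
    """Keep examples with 2-10 source articles and produce an 80/10/10 split."""
    kept = [ex for ex in examples if min_sources <= len(ex[0]) <= max_sources]
    random.Random(seed).shuffle(kept)
    n = len(kept)
    train = kept[: int(0.8 * n)]
    valid = kept[int(0.8 * n): int(0.9 * n)]
    test = kept[int(0.9 * n):]
    return train, valid, test
```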
Additionally, we compare to the single-document CNNDM dataset, as this has been recently used in work which adapts SDS to MDS (Lebanoff et al., 2018). The number of examples in our Multi-News dataset is two orders of magnitude larger than previous MDS news data. The total number of words in the concatenated inputs is shorter than other MDS datasets, as those consist of 10 input documents, but larger than SDS datasets, as expected. Our summaries are notably longer than in other works, about 260 words on average. While compressing information into a shorter text is the goal of summarization, our dataset tests the ability of abstractive models to generate fluent text concise in meaning while also coherent in the entirety of its generally longer output, which we consider an interesting challenge. 3.2 Diversity We report the percentage of n-grams in the gold summaries which do not appear in the input documents as a measure of how abstractive our summaries are in Table 4. As the table shows, the smaller MDS datasets tend to be more abstractive, but Multi-News is comparable and similar to the abstractiveness of SDS datasets. Grusky et al. (2018) additionally define three measures of the extractive nature of a dataset, which we use here for a comparison. We extend these notions to the multi-document setting by concatenating the source documents and treating them as a single input. Extractive fragment coverage is the percentage of words in the summary that are from the source article, measuring the extent to which a summary is derivative of a text: COVERAGE(A,S) = 1 |S| X f∈F(A,S) |f| (1) where A is the article, S the summary, and F(A, S) the set of all token sequences identified as extractive in a greedy manner; if there is a sequence of source tokens that is a prefix of the remainder of the summary, that is marked as extractive. Similarly, density is defined as the average length of the extractive fragment to which each summary word belongs: DENSITY(A,S) = 1 |S| X f∈F(A,S) |f|2 (2) Finally, compression ratio is defined as the word ratio between the articles and its summaries: COMPRESSION(A,S) = |A| |S| (3) These numbers are plotted using kernel density estimation in Figure 1. As explained above, our summaries are larger on average, which corresponds to a lower compression rate. The variability along the x-axis (fragment coverage), suggests 1077 Dataset # pairs # words (doc) # sents (docs) # words (summary) # sents (summary) vocab size Multi-News 44,972/5,622/5,622 2,103.49 82.73 263.66 9.97 666,515 DUC03+04 320 4,636.24 173.15 109.58 2.88 19,734 TAC 2011 176 4,695.70 188.43 99.70 1.00 24,672 CNNDM 287,227/13,368/11,490 810.57 39.78 56.20 3.68 717,951 Table 3: Comparison of our Multi-News dataset to other MDS datasets as well as an SDS dataset used as training data for MDS (CNNDM). Training, validation and testing size splits (article(s) to summary) are provided when applicable. Statistics for multi-document inputs are calculated on the concatenation of all input sources. % novel n-grams Multi-News DUC03+04 TAC11 CNNDM uni-grams 17.76 27.74 16.65 19.50 bi-grams 57.10 72.87 61.18 56.88 tri-grams 75.71 90.61 83.34 74.41 4-grams 82.30 96.18 92.04 82.83 Table 4: Percentage of n-grams in summaries which do not appear in the input documents , a measure of the abstractiveness, in relevant datasets. 
1 3 5 7 DUC 03+04 n = 320 c = 39.88 TAC 2011 n = 176 c = 45.00 0.5 0.7 0.9 1 3 5 7 CNN / Daily Mail n = 287,227 c = 12.92 0.5 0.7 0.9 Multi-News n = 44,972 c = 6.34 Extractive fragment coverage Extractive fragment density Figure 1: Density estimation of extractive diversity scores as explained in Section 3.2. Large variability along the y-axis suggests variation in the average length of source sequences present in the summary, while the x axis shows variability in the average length of the extractive fragments to which summary words belong. variability in the percentage of copied words, with the DUC data varying the most. In terms of y-axis (fragment density), our dataset shows variability in the average length of copied sequence, suggesting varying styles of word sequence arrangement. Our dataset exhibits extractive characteristics similar to the CNNDM dataset. 3.3 Other Datasets As discussed above, large scale datasets for multidocument news summarization are lacking. There have been several attempts to create MDS datasets in other domains. Zopf (2018) introduce a multilingual MDS dataset based on English and German Wikipedia articles as summaries to create a set of about 7,000 examples. Liu et al. (2018) use Wikipedia as well, creating a dataset of over two million examples. That paper uses Wikipedia references as input documents but largely relies on Google search to increase topic coverage. We, however, are focused on the news domain, and the source articles in our dataset are specifically cited by the corresponding summaries. Related work has also focused on opinion summarization in the multi-document setting; Angelidis and Lapata (2018) introduces a dataset of 600 Amazon product reviews. 4 Preliminaries We introduce several common methods for summarization. 4.1 Pointer-generator Network The pointer-generator network (See et al., 2017) is a commonly-used encoder-decoder summarization model with attention (Bahdanau et al., 2014) which combines copying words from source documents and outputting words from a vocabulary. The encoder converts each token wi in the document into the hidden state hi. At each decoding step t, the decoder has a hidden state dt. An attention distribution at is calculated as in (Bahdanau et al., 2014) and is used to get the context vector h∗ t , which is a weighted sum of the encoder hidden states, representing the semantic meaning of the related document content for this decoding time step: et i = vT tanh(Whhi + Wsdt + battn) at = softmax(et) h∗ t = X i at iht i (4) The context vector h∗ t and the decoder hidden state dt are then passed to two linear layers to produce the vocabulary distribution Pvocab. For each word, there is also a copy probability Pcopy. It is the sum 1078 of the attention weights over all the word occurrences: Pvocab = softmax(V ′(V [dt, h∗ t ] + b) + b ′) Pcopy = X i:wi=w at i (5) The pointer-generator network has a soft switch pgen, which indicates whether to generate a word from vocabulary by sampling from Pvocab, or to copy a word from the source sequence by sampling from the copy probability Pcopy. pgen = σ(wT h∗h∗ t + wT d dt + wT x xt + bptr) (6) where xt is the decoder input. 
The final probability distribution is a weighted sum of the vocabulary distribution and copy probability: P(w) = pgenPvocab(w) + (1 −pgen)Pcopy(w) (7) 4.2 Transformer The Transformer model replaces recurrent layers with self-attention in an encoder-decoder framework and has achieved state-of-the-art results in machine translation (Vaswani et al., 2017) and language modeling (Baevski and Auli, 2019; Dai et al., 2019). The Transformer has also been successfully applied to SDS (Gehrmann et al., 2018). More specifically, for each word during encoding, the multi-head self-attention sub-layer allows the encoder to directly attend to all other words in a sentence in one step. Decoding contains the typical encoder-decoder attention mechanisms as well as self-attention to all previous generated output. The Transformer motivates the elimination of recurrence to allow more direct interaction among words in a sequence. 4.3 MMR Maximal Marginal Relevance (MMR) is an approach for combining query-relevance with information-novelty in the context of summarization (Carbonell and Goldstein, 1998). MMR produces a ranked list of the candidate sentences based on the relevance and redundancy to the query, which can be used to extract sentences. The score is calculated as follows: (8) MMR = argmax Di∈R\S  λSim1(Di, Q) −(1 −λ) max Dj∈S Sim2(Di, Dj)  Figure 2: Our Hierarchical MMR-Attention Pointergenerator (Hi-MAP) model incorporates sentence-level representations and hidden-state-based MMR on top of a standard pointer-generator network. where R is the collection of all candidate sentences, Q is the query, S is the set of sentences that have been selected, and R \ S is set of the un-selected ones. In general, each time we want to select a sentence, we have a ranking score for all the candidates that considers relevance and redundancy. A recent work (Lebanoff et al., 2018) applied MMR for multi-document summarization by creating an external module and a supervised regression model for sentence importance. Our proposed method, however, incorporates MMR with the pointer-generator network in an end-toend manner that learns parameters for similarity and redundancy. 5 Hi-MAP Model In this section, we provide the details of our Hierarchical MMR-Attention Pointer-generator (HiMAP) model for multi-document neural abstractive summarization. We expand the existing pointer-generator network model into a hierarchical network, which allows us to calculate sentence-level MMR scores. Our model consists of a pointer-generator network and an integrated MMR module, as shown in Figure 2. 5.1 Sentence representations To expand our model into a hierarchical one, we compute sentence representations on both the encoder and decoder. The input is a collection of sentences D = [s1, s2, .., sn] from all the source documents, where a given sentence si = [wk−m, wk−m+1, ..., wk] is made up of input word tokens. Word tokens from the whole document are treated as a single sequential input to a Bi-LSTM encoder as in the original encoder of the pointer1079 generator network from See et al. (2017) (see bottom of Figure 2). For each time step, the output of an input word token wl is hw l (we use superscript w to indicate word-level LSTM cells, s for sentence-level). To obtain a representation for each sentence si, we take the encoder output of the last token for that sentence. If that token has an index of k in the whole document D, then the sentence representation is marked as hw si = hw k . 
The wordlevel sentence embeddings of the document hw D = [hw s1, hw s2, ..hw sn] will be a sequence which is fed into a sentence-level LSTM network. Thus, for each input sentence hw si, we obtain an output hidden state hs si. We then get the final sentence-level embeddings hs D = [hs 1, hs 2, ..hs n] (we omit the subscript for sentences s). To obtain a summary representation, we simply treat the current decoded summary as a single sentence and take the output of the last step of the decoder: ssum. We plan to investigate alternative methods for input and output sentence embeddings, such as separate LSTMs for each sentence, in future work. 5.2 MMR-Attention Now, we have all the sentence-level representation from both the articles and summary, and then we apply MMR to compute a ranking on the candidate sentences hs D. Intuitively, incorporating MMR will help determine salient sentences from the input at the current decoding step based on relevancy and redundancy. We follow Section 4.3 to compute MMR scores. Here, however, our query document is represented by the summary vector ssum, and we want to rank the candidates in hs D. The MMR score for an input sentence i is then defined as: (9) MMRi = λSim1(hs i, ssum) −(1 −λ) max sj∈D,j̸=i Sim2(hs i, hs j) We then add a softmax function to normalize all the MMR scores of these candidates as a probability distribution. (10) MMRi = exp(MMRi) P i exp(MMRi) Now we define the similarity function between each candidate sentence hs i and summary sentence ssum to be: Sim1 = hs i T WSimssum (11) where WSim is a learned parameter used to transform ssum and hs i into a common feature space. For the second term of Equation 9, instead of choosing the maximum score from all candidates except for hs i, which is intended to find the candidate most similar to hs i, we choose to apply a self-attention model on hs i and all the other candidates hs j ∈hs D. We then choose the largest weight as the final score: vij = tanh  hs j T Wselfhs i  βij = exp (vij) P j exp (vij) scorei = max j (βi,j) (12) Note that Wself is also a trainable parameter. Eventually, the MMR score from Equation 9 becomes: (13) MMRi = λSim1(hs i, ssum) −(1 −λ)scorei 5.3 MMR-attention Pointer-generator After we calculate MMRi for each sentence representation hs i, we use these scores to update the word-level attention weights for the pointergenerator model shown by the blue arrows in Figure 2. Since MMRi is a sentence weight for hs i, each token in the sentence will have the same value of MMRi. The new attention for each input token from Equation 4 becomes: at = atMMRi (14) 6 Experiments In this section we describe additional methods we compare with and present our assumptions and experimental process. 6.1 Baseline and Extractive Methods First We concatenate the first sentence of each article in a document cluster as the system summary. For our dataset, First-k means the first k sentences from each source article will be concatenated as the summary. Due to the difference in gold summary length, we only use First-1 for DUC, as others would exceed the average summary length. LexRank Initially proposed by (Erkan and Radev, 2004), LexRank is a graph-based method for computing relative importance in extractive summarization. 1080 TextRank Introduced by (Mihalcea and Tarau, 2004), TextRank is a graph-based ranking model. Sentence importance scores are computed based on eigenvector centrality within a global graph from the corpus. 
MMR In addition to incorporating MMR in our pointer generator network, we use this original method as an extractive summarization baseline. When testing on DUC data, we set these extractive methods to give an output of 100 tokens and 300 tokens for Multi-News data. 6.2 Neural Abstractive Methods PG-Original, PG-MMR These are the original pointer-generator network models reported by (Lebanoff et al., 2018). PG-BRNN The PG-BRNN model is a pointergenerator implementation from OpenNMT2. As in the original paper (See et al., 2017), we use a 1layer bi-LSTM as encoder, with 128-dimensional word-embeddings and 256-dimensional hidden states for each direction. The decoder is a 512dimensional single-layer LSTM. We include this for reference in addition to PG-Original, as our HiMAP code builds upon this implementation. CopyTransformer Instead of using an LSTM, the CopyTransformer model used in Gehrmann et al. (2018) uses a 4-layer Transformer of 512 dimensions for encoder and decoder. One of the attention heads is chosen randomly as the copy distribution. This model and the PG-BRNN are run without the bottom-up masked attention for inference from Gehrmann et al. (2018) as we did not find a large improvement when reproducing the model on this data. 6.3 Experimental Setting Following the setting from (Lebanoff et al., 2018), we report ROUGE (Lin, 2004) scores, which measure the overlap of unigrams (R-1), bigrams (R2) and skip bigrams with a max distance of four words (R-SU). For the neural abstractive models, we truncate input articles to 500 tokens in the following way: for each example with S source input documents, we take the first 500/S tokens from each source document. As some source documents may be shorter, we iteratively determine the number of tokens to take from each document until the 500 token quota is reached. Hav2https://github.com/OpenNMT/ OpenNMT-py/blob/master/docs/source/ Summarization.md ing determined the number of tokens per source document to use, we concatenate the truncated source documents into a single mega-document. This effectively reduces MDS to SDS on longer documents, a commonly-used assumption for recent neural MDS papers (Cao et al., 2017; Liu et al., 2018; Lebanoff et al., 2018). We chose 500 as our truncation size as related MDS work did not find significant improvement when increasing input length from 500 to 1000 tokens (Liu et al., 2018). We simply introduce a special token between source documents to aid our models in detecting document-to-document relationships and leave direct modeling of this relationship, as well as modeling longer input sequences, to future work. We hope that the dataset we introduce will promote such work. For our Hi-MAP model, we applied a 1-layer bidirectional LSTM network, with the hidden state dimension 256 in each direction. The sentence representation dimension is also 256. We set the λ = 0.5 to calculate the MMR value in Equation 9. Method R-1 R-2 R-SU First 30.77 8.27 7.35 LexRank (Erkan and Radev, 2004) 35.56 7.87 11.86 TextRank (Mihalcea and Tarau, 2004) 33.16 6.13 10.16 MMR (Carbonell and Goldstein, 1998) 30.14 4.55 8.16 PG-Original(Lebanoff et al., 2018) 31.43 6.03 10.01 PG-MMR(Lebanoff et al., 2018) 36.42 9.36 13.23 PG-BRNN (Gehrmann et al., 2018) 29.47 6.77 7.56 CopyTransformer (Gehrmann et al., 2018) 28.54 6.38 7.22 Hi-MAP (Our Model) 35.78 8.90 11.43 Table 5: ROUGE scores on the DUC 2004 dataset for models trained on CNNDM data, as in Lebanoff et al. 
(2018).3 Method R-1 R-2 R-SU First-1 26.83 7.25 6.46 First-2 35.99 10.17 12.06 First-3 39.41 11.77 14.51 LexRank (Erkan and Radev, 2004) 38.27 12.70 13.20 TextRank (Mihalcea and Tarau, 2004) 38.44 13.10 13.50 MMR (Carbonell and Goldstein, 1998) 38.77 11.98 12.91 PG-Original (Lebanoff et al., 2018) 41.85 12.91 16.46 PG-MMR (Lebanoff et al., 2018) 40.55 12.36 15.87 PG-BRNN (Gehrmann et al., 2018) 42.80 14.19 16.75 CopyTransformer (Gehrmann et al., 2018) 43.57 14.03 17.37 Hi-MAP (Our Model) 43.47 14.89 17.41 Table 6: ROUGE scores for models trained and tested on the Multi-News dataset. 3As our focus was on deep methods for MDS, we only tested several non-neural baselines. However, other classical methods deserve more attention, for which we refer the reader to Hong et al. (2014) and leave the implementation of these methods on Multi-News for future work. 1081 Method Informativeness Fluency Non-Redundancy PG-MMR 95 70 45 Hi-MAP 85 75 100 CopyTransformer 99 100 107 Human 150 150 149 Table 7: Number of times a system was chosen as best in pairwise comparisons according to informativeness, fluency and non-redundancy. 7 Analysis and Discussion In Table 5 and Table 6 we report ROUGE scores on DUC 2004 and Multi-News datasets respectively. We use DUC 2004, as results on this dataset are reported in Lebanoff et al. (2018), although this dataset is not the focus of this work. For results on DUC 2004, models were trained on the CNNDM dataset, as in Lebanoff et al. (2018). PGBRNN and CopyTransformer models, which were pretrained by OpenNMT on CNNDM, were applied to DUC without additional training, analogous to PG-Original. We also experimented with training on Multi-News and testing on DUC data, but we did not see significant improvements. We attribute the generally low performance of pointergenerator, CopyTransformer and Hi-MAP to domain differences between DUC and CNNDM as well as DUC and Multi-News. These domain differences are evident in the statistics and extractive metrics discussed in Section 3. Additionally, for both DUC and Multi-News testing, we experimented with using the output of 500 tokens from extractive methods (LexRank, TextRank and MMR) as input to the abstractive model. However, this did not improve results. We believe this is because our truncated input mirrors the First-3 baseline, which outperforms these three extractive methods and thus may provide more information as input to the abstractive model. Our model outperforms PG-MMR when trained and tested on the Multi-News dataset. We see much-improved model performances when trained and tested on in-domain Multi-News data. The Transformer performs best in terms of R-1 while Hi-MAP outperforms it on R-2 and R-SU. Also, we notice a drop in performance between PGoriginal, and PG-MMR (which takes the pretrained PG-original and applies MMR on top of the model). Our PG-MMR results correspond to PG-MMR w Cosine reported in Lebanoff et al. (2018). We trained their sentence regression model on Multi-News data and leave the investigation of transferring regression models from SDS to Multi-News for future work. In addition to automatic evaluation, we performed human evaluation to compare the summaries produced. We used Best-Worst Scaling (Louviere and Woodworth, 1991; Louviere et al., 2015), which has shown to be more reliable than rating scales (Kiritchenko and Mohammad, 2017) and has been used to evaluate summaries (Narayan et al., 2018a; Angelidis and Lapata, 2018). 
Annotators were presented with the same input that the systems saw at testing time; input documents were truncated, and we separated input documents by visible spaces in our annotator interface. We chose three native English speakers as annotators. They were presented with input documents, and summaries generated by two out of four systems, and were asked to determine which summary was better and which was worse in terms of informativeness (is the meaning in the input text preserved in the summary?), fluency (is the summary written in well-formed and grammatical English?) and non-redundancy (does the summary avoid repeating information?). We randomly selected 50 documents from the Multi-News test set and compared all possible combinations of two out of four systems. We chose to compare PG-MMR, CopyTransformer, Hi-MAP and gold summaries. The order of summaries was randomized per example. The results of our pairwise human-annotated comparison are shown in Table 7. Human-written summaries were easily marked as better than other systems, which, while expected, shows that there is much room for improvement in producing readable, informative summaries. We performed pairwise comparison of the models over the three metrics combined, using a one-way ANOVA with Tukey HSD tests and p value of 0.05. Overall, statistically significant differences were found between human summaries score and all other systems, CopyTransformer and the other two models, and our Hi-MAP model compared to PG-MMR. Our Hi-MAP model performs comparably to PGMMR on informativeness and fluency but much better in terms of non-redundancy. We believe that the incorporation of learned parameters for similarity and redundancy reduces redundancy in our output summaries. In future work, we would like to incorporate MMR into Transformer models to benefit from their fluent summaries. 8 Conclusion In this paper we introduce Multi-News, the first large-scale multi-document news summarization 1082 dataset. We hope that this dataset will promote work in multi-document summarization similar to the progress seen in the single-document case. Additionally, we introduce an end-to-end model which incorporates MMR into a pointer-generator network, which performs competitively compared to previous multi-document summarization models. We also benchmark methods on our dataset. In the future we plan to explore interactions among documents beyond concatenation and experiment with summarizing longer input documents. References 2008. The New York Times Annotated Corpus. Stefanos Angelidis and Mirella Lapata. 2018. Summarizing Opinions: Aspect Extraction Meets Sentiment Prediction and They Are Both Weakly Supervised. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 3675–3686. Alexei Baevski and Michael Auli. 2019. Adaptive input representations for neural language modeling. In International Conference on Learning Representations. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv preprint arXiv:1409.0473. Regina Barzilay, Kathleen R. McKeown, and Michael Elhadad. 1999. Information Fusion in the Context of Multi-Document Summarization. In 27th Annual Meeting of the Association for Computational Linguistics, University of Maryland, College Park, Maryland, USA, 20-26 June 1999. Tal Baumel, Matan Eyal, and Michael Elhadad. 2018. 
Query Focused Abstractive Summarization: Incorporating Query Relevance, Multi-Document Coverage, and Summary Length Constraints into seq2seq Models. CoRR, abs/1801.07704. Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2017. Improving Multi-Document Summarization via Text Classification. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 3053–3059. Jaime Carbonell and Jade Goldstein. 1998. The Use of MMR, Diversity-Based Reranking for Reordering Documents and Producing Summaries. In Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 335–336. ACM. Asli C¸ elikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep Communicating Agents for Abstractive Summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1662–1675. Jianpeng Cheng and Mirella Lapata. 2016. Neural Summarization by Extracting Sentences and Words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive Sentence Summarization with Attentive Recurrent Neural Networks. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 93–98. Eric Chu and Peter J. Liu. 2018. Unsupervised Neural Multi-Document Abstractive Summarization. CoRR, abs/1810.05739. Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 615–621. Zihang Dai, Zhilin Yang, Yiming Yang, William W. Cohen, Jaime Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Language modeling with longer-term dependency. G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-Based Lexical Centrality as Salience in Text Summarization. Journal of artificial intelligence research, 22:457–479. Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A Graph Based Approach to Abstractive Summarization of Highly Redundant Opinions. In COLING 2010, 23rd International Conference on Computational Linguistics, Proceedings of the Conference, 23-27 August 2010, Beijing, China, pages 340–348. Sebastian Gehrmann, Yuntian Deng, and Alexander M. Rush. 2018. Bottom-Up Abstractive Summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4098–4109. 1083 Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies. CoRR, abs/1804.11283. Aria Haghighi and Lucy Vanderwende. 2009. Exploring Content Models for Multi-Document Summarization. 
In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, May 31 - June 5, 2009, Boulder, Colorado, USA, pages 362–370. Karl Moritz Hermann, Tom´as Kocisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching Machines to Read and Comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1693–1701. Kai Hong, John M. Conroy, Benoˆıt Favre, Alex Kulesza, Hui Lin, and Ani Nenkova. 2014. A repository of state of the art and competitive baseline summaries for generic news summarization. In Proceedings of the Ninth International Conference on Language Resources and Evaluation, LREC 2014, Reykjavik, Iceland, May 26-31, 2014., pages 1608–1616. Svetlana Kiritchenko and Saif Mohammad. 2017. Best-Worst Scaling More Reliable than Rating Scales: A Case Study on Sentiment Intensity Annotation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 2: Short Papers, pages 465–470. Logan Lebanoff, Kaiqiang Song, and Fei Liu. 2018. Adapting the Neural Encoder-Decoder Framework from Single to Multi-Document summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4131–4141. Chin-Yew Lin. 2004. Rouge: A Package for Automatic Evaluation of Summaries. Text Summarization Branches Out. Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating Wikipedia by Summarizing Long Sequences. CoRR, abs/1801.10198. Jordan Louviere, Terry Flynn, and A. A. J. Marley. 2015. Best-Worst Scaling: Theory, Methods and Applications. Jordan J Louviere and George G Woodworth. 1991. Best-Worst Scaling: A Model for the Largest Difference Judgments. Kathleen R. McKeown and Dragomir R. Radev. 1995. Generating summaries of multiple news articles. In Proceedings, ACM Conference on Research and Development in Information Retrieval SIGIR’95, pages 74–82, Seattle, Washington. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing Order into Text. In Proceedings of the 2004 conference on empirical methods in natural language processing. Ramesh Nallapati, Bowen Zhou, and Mingbo Ma. 2016a. Classify or Select: Neural Architectures for Extractive Document Summarization. CoRR, abs/1611.04244. Ramesh Nallapati, Bowen Zhou, C´ıcero Nogueira dos Santos, C¸ aglar G¨ulc¸ehre, and Bing Xiang. 2016b. Abstractive Text Summarization Using Sequenceto-Sequence RNNs and Beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 280–290. Courtney Napoles, Matthew R. Gormley, and Benjamin Van Durme. 2012. Annotated Gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction, AKBC-WEKEX@NAACLHLT 2012, Montr`eal, Canada, June 7-8, 2012, pages 95–100. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018a. Don’t Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807. Association for Computational Linguistics. Shashi Narayan, Shay B. 
Cohen, and Mirella Lapata. 2018b. Ranking Sentences for Extractive Summarization with Reinforcement Learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACLHLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1747–1759. Karolina Owczarzak and Hoa Trang Dang. 2011. Overview of the TAC 2011 Summarization Track: Guided Task and AESOP Task. Over Paul and Yen James. 2004. An Introduction to DUC-2004. In Proceedings of the 4th Document Understanding Conference (DUC 2004). Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A Deep Reinforced Model for Abstractive Summarization. CoRR, abs/1705.04304. Dragomir R. Radev, Hongyan Jing, and Malgorzata Budzikowska. 2000. Centroid-Based Summarization of Multiple Documents: Sentence Extraction utility-based evaluation, and user studies. CoRR, cs.CL/0005020. 1084 Dragomir R. Radev and Kathleen R. McKeown. 1998. Generating Natural Language Summaries from Multiple On-Line Sources. Computational Linguistics, 24(3):469–500. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A Neural Attention Model for Abstractive Sentence Summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 379–389. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get To The Point: Summarization with Pointer-Generator Networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1073–1083. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir R. Radev. 2017. Graph-Based Neural Multi-Document Summarization. In Proceedings of CoNLL-2017. Association for Computational Linguistics. Jianmin Zhang, Jiwei Tan, and Xiaojun Wan. 2018a. Adapting Neural Single-Document Summarization Model for Abstractive Multi-Document Summarization: A Pilot Study. In Proceedings of the 11th International Conference on Natural Language Generation, Tilburg University, The Netherlands, November 5-8, 2018, pages 381–390. Jianmin Zhang, Jiwei Tan, and Xiaojun Wan. 2018b. Towards a Neural Network Approach to Abstractive Multi-Document Summarization. CoRR, abs/1804.09010. Markus Zopf. 2018. Auto-hmds: Automatic Construction of a Large Heterogeneous Multilingual MultiDocument Summarization Corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085–1097 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1085 Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency Shuhuai Ren Yihe Deng Huazhong University of Science and Technology University of California, Los Angeles shuhuai [email protected] [email protected] Kun He∗ Wanxiang Che School of Computer Science and Technology, School of Computer Science and Technology, Huazhong University of Science and Technology Harbin Institute of Technology [email protected] [email protected] Abstract We address the problem of adversarial attacks on text classification, which is rarely studied comparing to attacks on image classification. The challenge of this task is to generate adversarial examples that maintain lexical correctness, grammatical correctness and semantic similarity. Based on the synonyms substitution strategy, we introduce a new word replacement order determined by both the word saliency and the classification probability, and propose a greedy algorithm called probability weighted word saliency (PWWS) for text adversarial attack. Experiments on three popular datasets using convolutional as well as LSTM models show that PWWS reduces the classification accuracy to the most extent, and keeps a very low word substitution rate. A human evaluation study shows that our generated adversarial examples maintain the semantic similarity well and are hard for humans to perceive. Performing adversarial training using our perturbed datasets improves the robustness of the models. At last, our method also exhibits a good transferability on the generated adversarial examples. 1 Introduction Deep neural networks (DNNs) have exhibited vulnerability to adversarial examples primarily for image classification (Szegedy et al., 2013; Goodfellow et al., 2015; Nguyen et al., 2015). Adversarial examples are input data that are artificially modified to cause mistakes in models. For image classifications, the researchers have proposed various methods to add small perturbations on images that are imperceptible to humans but can cause misclassification in DNN classifiers. Due to the variety of key applications of DNNs in computer vision, the security issue raised by adversarial examples has attracted much attention in liter∗Corresponding author. atures since 2014, and numerous approaches have been proposed for either attack (Goodfellow et al., 2015; Kurakin et al., 2016; Tram`er et al., 2018; Dong et al., 2018), as well as defense (Goodfellow et al., 2015; Tram`er et al., 2018; Wong and Kolter, 2018; Song et al., 2019). In the area of Natural Language Processing (NLP), there is only a few lines of works done recently that address adversarial attacks for NLP tasks (Liang et al., 2018; Samanta and Mehta, 2017; Alzantot et al., 2018). This may be due to the difficulty that words in sentences are discrete tokens, while the image space is continuous to perform gradient descent related attacks or defnses. It is also hard in human’s perception to make sense of the texts with perturbations while for images minor changes on pixels still yield a meaningful image for human eyes. Meanwhile, the existence of adversarial examples for NLP tasks, such as span filtering, fake news detection, sentiment analysis, etc., raises concerns on significant security issues in their applications. 
In this work, we focus on the problem of generating valid adversarial examples for text classification, which could inspire more works for NLP attack and defense. In the area of NLP, as the input feature space is usually the word embedding space, it is hard to map a perturbed vector in the feature space to a valid word in the vocabulary. Thus, methods of generating adversarial examples in the image field can not be directly transferred to NLP attacks. The general approach, then, is to modify the original samples in the word level or in the character level to achieve adversarial attacks (Liang et al., 2018; Gao et al., 2018; Ebrahimi et al., 2018). We focus on the text adversarial example generation that could guarantee the lexical correctness with little grammatical error and semantic shifting. In this way, it achieves “small per1086 turbation” as the changes will be hard for humans to perceive. We introduce a new synonym replacement method called Probability Weighted Word Saliency (PWWS) that considers the word saliency as well as the classification probability. The change value of the classification probability is used to measure the attack effect of the proposed substitute word, while word saliency shows how well the original word affects the classification. The change value of the classification probability weighted by word saliency determines the final substitute word and replacement order. Extensive experiments on three popular datasets using convolutional as well as LSTM models demonstrate a good attack effect of PWWS. It reduces the accuracy of the DNN classifiers by up to 84.03%, outperforms existing text attacking methods. Meanwhile, PWWS has a much lower word substitution rate and exhibits a good transferability. We also do a human evaluation to show that our perturbations are hard for humans to perceive. In the end, we demonstrate that adversarial training using our generated examples can help improve robustness of the text classification models. 2 Related Work We first provide a brief review on related works for attacking text classification models. Liang et al. (2018) propose to find appropriate words for insertion, deletion and replacement by calculating the word frequency and the highest gradient magnitude of the cost function. But their method involves considerable human participation in crafting the adversarial examples. To maintain semantic similarity and avoid human detection, it requires human efforts such as searching related facts online for insertion. Therefore, subsequent research are mainly based on the word substitution strategy so as to avoid artificial fabrications and achieve automatic generations. The key difference of these subsequent methods is on how they generate substitute words. Samanta and Mehta (2017) propose to build a candidate pool that includes synonyms, typos and genre specific keywords. They adopt Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) to pick a candidate word for replacement. Papernot et al. (2016b) perturb a word vector by calculating forward derivative (Papernot et al., 2016a) and map the perturbed word vector to a closest word in the word embedding space. Yang et al. (2018) derive two methods, Greedy Attack based on perturbation, and Gumbel Attack based on scalable learning. Aiming to restore the interpretability of adversarial attacks based on word substitution strategy, Sato et al. (2018) restrict the direction of perturbations towards existing words in the input embedding space. 
As the above methods all need to calculate the gradient with access to the model structure, model parameters, and the feature set of the inputs, they are classified as white-box attacks. To achieve attack under a black-box setting, which assumes no access to the details of the model or the feature representation of the inputs, Alzantot et al. (2018) propose to use a population-based optimization algorithm. Gao et al. (2018) present a DeepWordBug algorithm to generate small perturbations in the character-level for black-box attack. They sort the tokens based on the importance evaluated by four functions, and make random token transformations such as substitution and deletion with the constraint of edit distance. Ebrahimi et al. (2018) also propose a token transformation method, and it is based on the gradients of the one-hot input vectors. The downside of the character-level perturbations is that they usually lead to lexical errors, which hurts the readability and can easily be perceived by humans. The related works have achieved good results for text adversarial attacks, but there is still much room for improvement regarding the percentage of modifications, attacking success rate, maintenance on lexical as well as grammatical correctness and semantic similarity, etc. Based on the synonyms substitution strategy, we propose a novel blackbox attack method called PWWS for the NLP classification tasks and contribute to the field of adversarial machine learning. 3 Text Classification Attack Given an input feature space X containing all possible input texts (in vector form x) and an output space Y = {y1, y2, . . . , yK} containing K possible labels of x, the classifier F needs to learn a mapping f : X →Y from an input sample x ∈X to a correct label ytrue ∈Y. In the following, we first give a definition of adversarial example for natural language classification, and then introduce our word substitution strategy. 1087 3.1 Text Adversarial Examples Given a trained natural language classifier F, which can correctly classify the original input text x to the label ytrue based on the maximum posterior probability. arg max yi∈Y P(yi|x) = ytrue. (1) We attack the classifier by adding an imperceptible perturbation ∆x to x to craft an adversarial example x∗, for which F is expected to give a wrong label: arg max yi∈Y P(yi|x∗) ̸= ytrue. Eq. (2) gives the definition of the adversarial example x∗: x∗= x + ∆x, ∥∆x∥p < ϵ, arg max yi∈Y P(yi|x∗) ̸= arg max yi∈Y P(yi|x). (2) The original input text can be expressed as x = w1w2 . . . wi . . . wn, where wi ∈D is a word and D is a dictionary of words. ∥∆x∥p defined in Eq. (3) uses p-norm to represent the constraint on perturbation ∆x, and L∞, L2 and L0 are commonly used. ∥∆x∥p = n X i=1 |w∗ i −wi|p ! 1 p . (3) To make the perturbation small enough so that it is imperceptible to humans, the adversarial examples need to satisfy lexical, grammatical, and semantic constraints. Lexical constraint requires that the correct word in the input sample cannot be changed to a common misspelled word, as a spell check before the input of the classifier can easily remove such perturbation. The perturbed samples, moreover, must be grammatically correct. Third, the modification on the original samples should not lead to significant changes in semantics as the semantic constraint requires. To meet the above constraints, we replace words in the input texts with synonyms and replace named entities (NEs) with similar NEs to generate adversarial samples. 
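To make the definition above concrete for word-level attacks, the following is a minimal sketch that tests whether a candidate perturbed text satisfies Eq. (2) and measures the perturbation size as the number of substituted words, an L0-style instance of Eq. (3). The probability-returning classifier callable is an assumed interface for illustration, not anything specified in the paper.

```python
import numpy as np

def is_adversarial(predict_proba, x_words, x_adv_words, y_true, max_changed=None):
    """predict_proba : callable mapping a text string to class probabilities (assumed interface)
       x_words, x_adv_words : original and perturbed texts as equal-length word lists
       y_true        : index of the correct label
       max_changed   : optional bound playing the role of epsilon for an L0-style norm
    """
    # ||Delta x||_0: count the word positions where the two texts differ (Eq. (3) with p = 0).
    n_changed = sum(w != w_adv for w, w_adv in zip(x_words, x_adv_words))
    if max_changed is not None and n_changed > max_changed:
        return False, n_changed

    # The attack succeeds only if argmax_y P(y | x*) != y_true (Eq. (2)).
    probs = np.asarray(predict_proba(" ".join(x_adv_words)))
    return int(np.argmax(probs)) != y_true, n_changed
```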
Synonyms for each word can be found in WordNet1, a large lexical database for the English language. NE refers to an entity that has a specific meaning in the sample text, such as a person’s name, a location, an organization, or a proper noun. Replacement of an NE with a similar NE imposes a slight change in semantics but invokes no lexical or grammatical changes. The candidate NE for replacement is picked in 1https://wordnet.princeton.edu/ the following. Assuming that the current input sample belongs to the class ytrue and dictionary Dytrue ⊆D contains all NEs that appear in the texts with class ytrue, we can use the most frequently occurring named entity NEadv in the complement dictionary D−Dytrue as a substitute word. In addition, the substitute NEadv must have the consistent type with the original NE, e.g., they must be both locations. 3.2 Word Substitution by PWWS In this work, we propose a new text attacking method called Probability Weighted Word Saliency (PWWS). Our approach is based on synonym replacement, and there are two key issues that we resolve in the greedy PWWS algorithm: the selection of synonyms or NEs and the decision of the replacement order. 3.2.1 Word Substitution Strategy For each word wi in x, we use WordNet to build a synonym set Li ⊆D that contains all synonyms of wi. If wi is an NE, we find NEadv which has a consistent type of wi to join Li. Then, every w′ i ∈Li is a candidate word for substitution of the original wi. We select a w′ i from Li as the proposed substitute word w∗ i if it causes the most significant change in the classification probability after replacement. The substitute word selection method R(wi, Li) is defined as follows: w∗ i = R(wi, Li) = arg max w′ i∈Li  P(ytrue|x) −P(ytrue|x′ i) , (4) where x = w1w2 . . . wi . . . wn, x′ i = w1w2 . . . w′ i . . . wn, and x′ i is the text obtained by replacing wi with each candidate word w′ i ∈Li. Then we replace wi with w∗ i and get a new text x∗ i : x∗ i = w1w2 . . . w∗ i . . . wn. The change in classification probability between x and x∗ i represents the best attack effect that can be achieved after replacing wi. ∆P ∗ i = P(ytrue|x) −P(ytrue|x∗ i ). (5) For each word wi ∈x, we find the corresponding substitute word w∗ i by Eq. (4), which solves the first key issue in PWWS. 1088 3.2.2 Replacement Order Strategy Furthermore, in the text classification tasks, each word in the input sample may have different level of impact on the final classification. Thus, we incorporate word saliency (Li et al., 2016b,a) into our algorithm to determine the replacement order. Word saliency refers to the degree of change in the output probability of the classifier if a word is set to unknown (out of vocabulary). The saliency of a word is computed as S(x, wi). S(x, wi) = P(ytrue|x) −P(ytrue|ˆxi) (6) where x = w1w2 . . . wi . . . wd, ˆxi = w1w2 . . . unknown . . . wd. We calculate the word saliency S(x, wi) for all wi ∈x to obtain a saliency vector S(x) for text x. To determine the priority of words for replacement, we need to consider the degree of change in the classification probability after substitution as well as the word saliency for each word. Thus, we score each proposed substitute word w∗ i by evaluating the ∆P ∗ i in Eq. (5) and ith value of S(x). The score function H(x, x∗ i , wi) is defined as: H(x, x∗ i , wi) = φ(S(x))i · ∆P ∗ i (7) where φ(z)i is the softmax function φ(z)i = ezi PK k=1 ezk . (8) z in Eq. (8) is a vector. zi and φ(z)i indicate the ith component of vector z and φ(z), respectively. φ(S(x)) in Eq. 
(7) indicates a softmax operation on word saliency vector S(x) and K = |S(x)|. Eq. (7) defined by probability weighted word saliency determines the replacement order. We sort all the words wi in x in descending order based on H(x, x∗ i , wi), then consider each word wi under this order and select the proposed substitute word w∗ i for wi to be replaced. We greedily iterate through the process until enough words have been replaced to make the final classification label change. The final PWWS Algorithm is as shown in Algorithm 1. 4 Empirical Evaluation For empirical evaluation, we compare PWWS with other attacking methods on three popular datasets involving four neural network classification models. Algorithm 1 PWWS Algorithm Input: Sample text x(0) before iteration; Input: Length of sample text x(0): n = |x(0)|; Input: Classifier F; Output: Adversarial example x(i) 1: for all i = 1 to n do 2: Compute word saliency S(x(0), wi) 3: Get a synonym set Li for wi 4: if wi is an NE then Li = Li ∪{NEadv} 5: end if 6: if Li = ∅then continue 7: end if 8: w∗ i = R(wi, Li); 9: end for 10: Reorder wi such that 11: H(x, x∗ 1, w1) > · · · > H(x, x∗ n, wn) 12: for all i = 1 to n do 13: Replace wi in x(i−1) with w∗ i to craft x(i) 14: if F(x(i)) ̸= F(x(0)) then break 15: end if 16: end for 4.1 Datasets Table 1 lists the details of the datasets, IMDB, AG’s News, and Yahoo! Answers. IMDB. IMDB is a large movie review dataset consisting of 25,000 training samples and 25,000 test samples, labeled as positive or negative. We use this dataset to train a word-based CNN model and a Bi-directional LSTM network for sentiment classification (Maas et al., 2011). AG’s News. This is a collection of more than one million news articles, which can be categorized into four classes: World, Sports, Business and Sci/Tech. Each class contains 30,000 training samples and 1,900 testing samples. Yahoo! Answers. This dataset consists of ten topic categories: Society & Culture, Science & Mathematics, Health, Education & Reference, Computers & Internet, etc. Each category contains 140,000 training samples and 5,000 test samples. 4.2 Deep Neural Models For deep neural models, we consider several classic as well as state-of-the-art models used for text classification. These models include both convolutional neural networks (CNN) and recurrent neural networks (RNN), for word-level or characterlevel data processing. 1089 Dataset #Classes #Train samples #Test samples #Average words Task IMDB Review 2 25,000 25,000 325.6 Sentiment analysis AG’s News 4 120,000 7600 278.6 News categorization Yahoo! Answers 10 1,400,000 50,000 108.4 Topic classification Table 1: Statistics on the datasets. “#Average words” indicates the average number of words per sample text. Word-based CNN (Kim, 2014) consists of an embedding layer that performs 50-dimensional word embeddings on 400-dimensional input vectors, an 1D-convolutional layer consisting of 250 filters of kernel size 3, an 1D-max-pooling layer, and two fully connected layers. This word-based classification model is used on all three datasets. Bi-directional LSTM consists of a 128dimensional embedding layer, a Bi-directional LSTM layer whose forward and reverse are respectively composed of 64 LSTM units, and a fully connected layer. This word-based classification model is used on IMDB dataset. Char-based CNN is identical to the structure in (Zhang et al., 2015) which includes two ConvNets. The two networks are both 9 layers deep with 6 convolutional layers and 3 fully-connected layers. 
This char-based classification model is used for AG’s News dataset. LSTM consists of a 100-dimensional embedding layer, an LSTM layer composed of 128 units, and a fully connected layer. This word-based classification model is used for Yahoo! Answers dataset. Column 3 in Table 2 demonstrates the classification accuracies of these models on original (clean) examples, which almost achieves the best results of the classification task on these datasets. 4.3 Attacking Methods We compare our PWWS 2 attacking method with the following baselines. All the baselines use WordNet to build the candidate synonym sets L. Random. We randomly select a synonym for each word in the original input text to replace, and keep performing such replacement until the classification output changes. Gradient. This method draws from FGSM (Goodfellow et al., 2015), which is previously proposed for image adversarial attack: x∗= x + ∆x = x + ϵ · sign (∇xJ (F, ytrue)) , (9) 2https://github.com/JHL-HUST/PWWS/ where J (F, ytrue) is the cost function used for training the neural network. For the sake of calculation, we will use the synonym that maximizes the change of prediction output ∆F(x) as the substitute word, where ∆F(x) is approximated by forward derivative: ∆F(x) = F x′ −F(x) ≈ x′ i −xi  ∂F(x) ∂xi . (10) This method using Eq. (10) is the main concept introduced in (Papernot et al., 2016b). Traversing in word order (TiWO). This method of traversing input sample text in word order finds substitute for each word according to Eq. (4). Word Saliency (WS). WS (Samanta and Mehta, 2017) sorts words in the input text based on word saliency in Eq. (6) in descending order, and finds substitute for each word according to Eq. (4). 4.4 Attacking Results We evaluate the merits of all above methods by using them to generate 2,000 adversarial examples respectively. The more effective the attacking method is, the more the classification accuracy of the model drops. Table 2 shows the classification accuracy of different models on the original samples and the adversarial samples generated by these attack methods. Results show that our method reduces the classification accuracies to the most extent. The classification accuracies on the three datasets IMDB, AG’s News, and Yahoo! Answers are reduced by an average of 81.05%, 33.62%, and 38.65% respectively. The effectiveness of the attack against multi-classification tasks is not as good as that for binary classification tasks. Our method achieves such effects by very few word replacements. Table 3 lists the word replacement rates of the adversarial examples generated by different methods. The rate refers to the number of substitute words divided by the total number of words in the original clean sample texts. It indicates that PWWS replaces the fewest words while 1090 Dataset Model Original Random Gradient TiWO WS PWWS IMDB word-CNN 86.55% 45.36% 37.43% 10.00% 9.64% 5.50% Bi-dir LSTM 84.86% 37.79% 14.57% 3.57% 3.93% 2.00% AG’s News char-CNN 89.70% 67.80% 72.14% 58.50% 62.45% 56.30% word-CNN 90.56% 74.13% 73.63% 60.70% 59.70% 56.72% Yahoo! Answers LSTM 92.00% 74.50% 73.80% 62.50% 62.50% 53.00% word-CNN 96.01% 82.09% 80.10% 69.15% 66.67% 57.71% Table 2: Classification accuracy of each selected model on the original three datasets and the perturbed datasets using different attacking methods. Column 3 (Original) represents the classification accuracy of the model for the original samples. A lower classification accuracy corresponds to a more effective attacking method. 
Dataset Model Random Gradient TiWO WS PWWS IMDB word-CNN 22.01% 20.53% 15.06% 14.38% 3.81% Bi-dir LSTM 17.77% 12.61% 4.34% 4.68% 3.38% AG’s News char-CNN 27.43% 27.73% 26.46% 21.94% 18.93% word-CNN 22.22% 22.09% 20.28% 20.21% 16.76% Yahoo! Answers LSTM 40.86% 41.09% 37.14% 39.75% 35.10% word-CNN 31.68% 31.29% 30.06% 30.42% 25.43% Table 3: Word replacement rate of each attacking method on the selected models for the three datasets. The lower the word replacement rate, the better the attacking method could be in terms of retaining the semantics of the text. Original Prediction Adversarial Prediction Perturbed Texts Positive Negative Ah man this movie was funny (laughable) as hell, yet strange. I like how they kept the shakespearian language in this movie, it just felt ironic because of how idiotic the movie really was. this movie has got to be one of troma’s best movies. highly recommended for some senseless fun! Confidence = 96.72% Confidence = 74.78% Negative Positive The One and the Only! The only really good description of the punk movement in the LA in the early 80’s. Also, the definitive documentary about legendary bands like the Black Flag and the X. Mainstream Americans’ repugnant views about this film are absolutely hilarious (uproarious)! How can music be SO diversive in a country of supposed liberty...even 20 years after... find out! Confidence = 72.40% Confidence = 69.03% Table 4: Adversarial example instances in the IMDB dataset with Bi-directional LSTM model. Columns 1 and 2 represent the category prediction and confidence of the classification model for the original sample and the adversarial examples, respectively. In column 3, the green word is the word in the original text, while the red is the substitution in the adversarial example. Original Prediction Adversarial Prediction Perturbed Texts Business Sci/Tech site security gets a recount at rock the vote. grassroots movement to register younger voters leaves publishing (publication) tools accessible to outsiders. Confidence = 91.26% Confidence = 33.81% Sci/Tech World seoul allies calm on nuclear (atomic) shock. south korea’s key allies play down a shock admission its scientists experimented to enrich uranium. Confidence = 74.25% Confidence = 86.66% Table 5: Adversarial example instances in the AG’s News dataset with char-based CNN model. Columns of this table is similar to those in Table 4. ensuring the semantic and syntactic features of the original sample remain unchanged to the utmost extent. Table 4 lists some adversarial examples generated for IMDB dataset with the Bi-directional LSTM classifier. The original positive/negative film reviews can be misclassified by only one synonym replacement and the model even holds a high degree of confidence. Table 5 lists some adversarial examples in AG’s News dataset with the char-based CNN. It also requires only one synonym to be replaced for the model to be misled to classify one type (Business) of news into another (Sci/Tech). The adversarial examples still convey the semantics of the original text such that humans do not recognize any change but the neural network classifiers are deceived. For more example comparisons between the ad1091 Dataset Model Examples Accuracy of model Accuracy of human Score[1-5] IMDB word-CNN Original 99.0% 98.0% 1.80 Adversarial 22.0% 93.0% 2.50 Bi-dir LSTM Original 86.0% 93.0% 1.70 Adversarial 12.0% 88.0% 2.08 AG’s News char-CNN Original 81.0% 63.9% 2.62 Adversarial 69.0% 58.0% 2.89 Table 6: Comparison with human evaluation. 
The fourth and fifth columns represent the classification accuracy of the model and human, respectively. The last column represents how much the workers think the text is likely to be modified by a machine. The larger the score, the higher the probability. versarial examples generated by different methods, see details in Appendix. Text classifier based on DNNs is widely used in NLP tasks. However, the existence of such adversarial samples exposes the vulnerability of these models, limiting their applications in securitycritical systems like spam filtering and fake news detection. 4.5 Discussions on Previous Works Yang et al. (2018) introduce a perturbationbased method called Greedy Attack and a scalable learning-based method called Gumbel Attack. They perform experiments on IMDB dataset with the same word-based CNN model, and on AG’s News dataset with a LSTM model. Their method greatly reduces the classification accuracy to less than 5% after replacing 5 words (Yang et al., 2018). However, the semantics of the replacement words are not constrained, as antonyms sometimes appear in their adversarial examples. Moreover, for instance, Table 3 in (Yang et al., 2018) shows that they change “... The plot could give a rise a must (better) movie if the right pieces was in the right places” to switch from negative to positive; and they change “The premise is good, the plot line script (interesting) and the screenplay was OK” to switch from positive to negative. The first sample changes the meaning of the sentence, while the second has grammatical errors. Under such condition, the perturbations could be recognized by humans. Gao et al. (2018) present a novel algorithm, DeepWordBug, that generates small text perturbations in the character-level for black-box attack. This method can cause a decrease of 68% on average for word-LSTM and 48% on average for char-CNN model when 30 edit operations were allowed. However, since their perturbation exists in the character-level, the generated adversarial examples often do not conform to the lexical constraint: misspelled words may exist in the text. For instance, they change a positive review of “This film has a special place in my heart” to get a negative review of “This film has a special plcae in my herat”. For such adversarial examples, a spell check on the input can easily remove the perturbation, and the effectiveness of such adversarial attack will be removed also. DeepWordBug is still useful, as we could improve the robustness in the training of classifiers by replacing misspelled word with out-of-vocabulary word, or simply remove misspelled words. However, as DeepWordBug can be easily defended by spell checking, we did not consider it as a baseline in our comparison. 5 Further Analysis This section provides a human evaluation to show that our perturbation is hard for humans to perceive, and studies the transferability of the generated examples by our methods. In the end, we show that using the generated examples for adversarial training helps improving the robustness of the text classification model. 5.1 Human Evaluation To further verify that the perturbations in the adversarial examples are hard for humans to recognize, we find six workers on Amazon Mechanical Turk to evaluate the examples generated by PWWS. Specifically, we select 100 clean texts in IMDB and the corresponding adversarial examples generated on word-based CNN. Then we select another 100 clean texts in IMDB and the corresponding adversarial examples generated on Bidirectional LSTM. 
For the third group, we select 100 clean texts from AG’s News and the corresponding adversarial examples generated on charbased CNN. For each group of date, we mix the clean data and generated examples for the workers to classify. To evaluate the similarity, we ask the workers to give scores from 1-5 to indicate the likelihood that the text is modified by machine. 1092 (a) Varying word replacement rates of the algorithms (b) Fixed word replacement rate of 10% Figure 1: Transferability of adversarial examples generated by different attacking methods on IMDB. The three color bars represent the average classification accuracies (in percentage) of the three new models on the adversarial examples generated by word-based CNN-1. The lower the classification accuracy, the better the transferability. Table 6 shows the comparison with human evaluation. The generated examples can cause misclassification on three different models, while the classification accuracy of humans is still very high comparing to their judgement on clean data. Since there are four categories for AG’s News, the classification accuracy of workers on this dataset is significantly lower than that on IMDB (binary classification tasks). Thus, we did not try human evaluation on Yahoo! Answers as there are 10 categories to classify. The likelihood scores of machine perturbation on adversarial examples are slightly higher than that on the original texts, indicating that the semantics of some synonyms are not as accurate as the original words. Nevertheless, as the accuracy of humans on the two sets of data are close, and the traces of machine modifications are still hard for humans to perceive. 5.2 Transferability The transferability of adversarial attack refers to its ability to reduce the accuracy of other models to a certain extent when the examples are generated on a specific classification model (Goodfellow et al., 2015; Szegedy et al., 2013). To illustrate this, we record the original wordbased CNN (described in Section 4.2) as wordbased CNN-1, and train three new proximity classification models on the IMDB dataset, labeled respectively as word-based CNN-2, word-based CNN-3 and Bi-directional LSTM network. Compared to word-based CNN-1, word-based CNN2 has an additional fully connected layer. Wordbased CNN-3 has the same network structure as CNN-1 except using GloVe (Pennington et al., 2014) as a pretrained word embedding. The network structure of Bi-directional LSTM is the one introduced in Section 4.2. When the adversarial examples generated by our method are transferred to word-based CNN2 or Bi-dir LSTM, the attacking effect is slightly inferior, as illustrated in Figure 1 (a). But note that the word replacement rate of our method on IMDB is only 3.81%, which is much lower than other methods (Table 3). When we use the same replacement ratio (say 10%) in the input text for all methods, the transferability of PWWS is significantly better than other methods. Figure 1 (b) illustrates that the word substitution order determined by PWWS corresponds well to the importance of the words for classification, and the transformation is effective across various models. 5.3 Adversarial Training Adversarial training (Shrivastava et al., 2017) is a popular technique mainly used in image classification to improve model robustness. 
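As a minimal sketch of this adversarial-training setup, the snippet below augments the clean training set with adversarial examples crafted from randomly selected training samples and retrains the classifier. The attack and trainer are supplied as callables; these interfaces are placeholders for illustration, not the authors' released code.

```python
import random

def adversarial_training(train_texts, train_labels, attack_fn, train_fn,
                         n_adv=4000, seed=0):
    """attack_fn(text, label) -> adversarial text  (e.g., a PWWS-style attack; assumed interface)
       train_fn(texts, labels) -> trained model    (assumed interface)
    """
    rng = random.Random(seed)
    picked = rng.sample(range(len(train_texts)), n_adv)

    # Set A: adversarial examples crafted from randomly selected clean training samples;
    # they keep the gold labels of the texts they were generated from.
    adv_texts = [attack_fn(train_texts[i], train_labels[i]) for i in picked]
    adv_labels = [train_labels[i] for i in picked]

    # Retrain the classifier on the union of clean and adversarial data.
    return train_fn(list(train_texts) + adv_texts, list(train_labels) + adv_labels)
```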
To verify whether incorporating adversarial training would help improve the robustness of the test classifiers, we randomly select clean samples from the training set of IMDB and use PWWS to generate 4000 adversarial examples as a set A, and train the word-based CNN model. We then evaluate the classification accuracy of the model on the original test data and of the adversarial examples generated using various methods. Figure 2 (a) shows that the classification accuracy of the model on the original test set is improved after adversarial training. Figure 2 (a) illustrates that the robustness of the classification model continues to improve when more adversarial examples are added to the training set. 1093 (a) Accuracy on the original test set (b) Accuracy on the adversarial examples generated by various methods Figure 2: The result of adversarial training on IMDB dataset. The x-axis represents the number of adversarial examples selected from set A to join the original training set. The classification accuracies are on the original test set and the adversarial examples generated using various methods, respectively. 6 Conclusion We propose an effective method called Probability Weighted Word Saliency (PWWS) for generating adversarial examples on text classification tasks. PWWS introduces a new word substitution order determined by the word saliency and weighted by the classification probability. Experiments show that PWWS can greatly reduce the text classification accuracy with a low word substitution rate, and such perturbation is hard for human to perceive. Our work demonstrates the existence of adversarial examples in discrete input spaces and shows the vulnerability of NLP models using neural networks. Comparison with existing baselines shows the advantage of our method. PWWS also exhibits a good transferability, and by performing adversarial training we can improve the robustness of the models at test time. In the future, we would like to evaluate the attacking effectiveness and efficiency of our methods on more datasets and models, and do elaborate human evaluation on the similarity between clean texts and the corresponding adversarial examples. References Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2890–2896. Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. 2018. Boosting adversarial attacks with momentum. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 9185–9193. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. Hotflip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 31–36. Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In 2018 IEEE Security and Privacy Workshops, SP Workshops 2018, San Francisco, CA, USA, May 24, 2018, pages 50–56. Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In International Conference on Learning Representations. Yoon Kim. 2014. Convolutional neural networks for sentence classification. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746–1751. Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533. Jiwei Li, Xinlei Chen, Eduard H. Hovy, and Dan Jurafsky. 2016a. Visualizing and understanding neural models in NLP. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 681–691. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. Understanding neural networks through representation erasure. CoRR, abs/1612.08220. Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2018. Deep text 1094 classification can be fooled. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden., pages 4208–4215. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Oregon, USA, pages 142–150. Anh Mai Nguyen, Jason Yosinski, and Jeff Clune. 2015. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 712, 2015, pages 427–436. Nicolas Papernot, Patrick D. McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. 2016a. The limitations of deep learning in adversarial settings. In IEEE European Symposium on Security and Privacy, EuroS&P 2016, Saarbr¨ucken, Germany, March 21-24, 2016, pages 372–387. Nicolas Papernot, Patrick D. McDaniel, Ananthram Swami, and Richard E. Harang. 2016b. Crafting adversarial input sequences for recurrent neural networks. In 2016 IEEE Military Communications Conference, MILCOM 2016, Baltimore, MD, USA, November 1-3, 2016, pages 49–54. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532–1543. Suranjana Samanta and Sameep Mehta. 2017. Towards crafting text adversarial samples. CoRR, abs/1707.02812. Motoki Sato, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. 2018. Interpretable adversarial perturbation in input embedding space for text. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden., pages 4323– 4330. Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Joshua Susskind, Wenda Wang, and Russell Webb. 2017. Learning from simulated and unsupervised images through adversarial training. In CVPR, volume 2, page 5. Chuanbiao Song, Kun He, Liwei Wang, and John E Hopcroft. 2019. Improving the generalization of adversarial training with domain adaptation. In The Seventh International Conference on Learning Representations, New Orleans, Louisiana. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. 
Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. CoRR, abs/1312.6199. Florian Tram`er, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. 2018. Ensemble adversarial training: Attacks and defenses. In International Conference on Learning Representations. Eric Wong and J. Zico Kolter. 2018. Provable defenses against adversarial examples via the convex outer adversarial polytope. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsm¨assan, Stockholm, Sweden, July 10-15, 2018, pages 5283–5292. Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, and Michael I. Jordan. 2018. Greedy attack and gumbel attack: Generating adversarial examples for discrete data. CoRR, abs/1805.12316. Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 649–657. Appendix In the Appendix, we add more comparisons between the adversarial examples generated by different methods, and comparisons between the original examples and the adversarial examples. 1095 Attack Perturbed Texts Methods Random The One and the Only (Solitary) ! Agreed this movie (pic) is well (comfortably) shot (hit), but it just (scarcely) makes no sense (mother) and no use (enjoyment) as to how they made 2 hours seem like 3 (7) just (scarcely) over a small (belittled) love (honey) story (taradiddle), this could have been an episode (sequence) of the bold (sheer) and the beautiful or the o.c, in short please don’t watch (learn) this movie (pic) because there is a song every 5 minutes just to wake (stir) you up from you’re sleep (quietus), i gave this movie (pic) 1/10! cause (induce) that was the lowest, and no this is not based completely on a true story, more than half of it is made up. I repeat the direction of photography is 7 or 8 out of 10, but the movie is just a little too much, the actor’s nasal voice just makes me want to go blow my nose. Unless you are a real him mesh fan this movie is a huge no-no. Confidence = 88.14% Gradient The One and the Only (Solitary) ! Agreed this movie (pic) is well (easily) shot (hit), but it just (scarcely) makes no sense (gumption) and no use (enjoyment) as to how they made 2 hours seem like 3 (7) just (simply) over a small (belittled) love (honey) story (taradiddle), this could have been an episode (sequence) of the bold (bluff) and the beautiful or the o.c, in short please don’t watch (learn) this movie (pic) because there is a song every 5 minutes just to wake (stir) you up from you’re sleep (quietus), i gave this movie (pic) 1/10! cause (induce) that was the lowest, and no this is not based completely on a true story, more than half of it is made up. I repeat the direction of photography is 7 or 8 out of 10, but the movie is just a little too much, the actor’s nasal voice just makes me want to go blow my nose. Unless you are a real him mesh fan this movie is a huge no-no. Confidence = 89.49% TiWO The One and the Only (Solitary) ! 
Agreed this movie (film) is well (easily) shot (hit), but it just (simply) makes no sense and no use (manipulation) as to how they made 2 hours seem like 3 (7) just (simply) over a small (humble) love (passion) story (level), this could have been an episode (sequence) of the bold (sheer) and the beautiful or the o.c, in short please don’t watch (keep) this movie (film) because there is a song every 5 minutes just to wake you up from you’re sleep (quietus), i gave this movie (motion) 1/10 (7)! cause (induce) that was the lowest, and no this is not based completely on a true story, more than half of it is made up. I repeat the direction of photography is 7 or 8 out of 10, but the movie is just a little too much, the actor’s nasal voice just makes me want to go blow my nose. Unless you are a real him mesh fan this movie is a huge no-no. Confidence = 57.76% WS The One and the Only (Solitary) ! Agreed this movie is well shot (hit), but it just (simply) makes no sense and no use as to how they made 2 hours seem like 3 just over a small (belittled) love (passion) story (taradiddle), this could have been an episode of the bold and the beautiful or the o.c, in short please don’t watch this movie because there is a song every 5 minutes just to wake you up from you’re sleep (quietus), i gave this movie (motion) 1/10! cause (induce) that was the lowest, and no this is not based (found) completely (wholly) on a true story (level), more than half of it is made up. I repeat the direction of photography (picture) is 7 or 8 (7) out of 10 (7), but the movie is just a little too much, the actor’s nasal voice just makes me want to go blow my nose (nozzle). Unless you are a real him mesh fan this movie is a huge no-no. Confidence = 50.04% PWWS The One and the Only! Agreed this movie is well shot, but it just makes no sense and no use as to how they made 2 hours seem like 3 just over a small love story, this could have been an episode of the bold and the beautiful or the o.c, in short please don’t watch this movie because there is a song every 5 minutes just to wake you up from you’re sleep, i gave this movie 1/10 (7)! cause that was the lowest, and no this is not based completely on a true story, more than half of it is made up. I repeat the direction of photography is 7 or 8 out of 10, but the movie is just a little too much, the actor’s nasal voice just makes me want to go blow my nose. Unless you are a real him mesh fan this movie is a huge no-no. Confidence = 89.77% Table 7: Adversarial examples generated for the same clean input text using different attack methods on wordbased CNN. We select a clean input text from the IMDB. The correct category of the original text is negative, and the classification confidence of word-based CNN is 82.77%. The adversarial examples generated by all methods succeeded in making the model misclassify from negative class into positive class. There is only one word substitution needed in our approach(PWWS) to make the attack successful, and it also maintains a high degree of confidence in the classification of wrong class. 1096 Original Adversarial Perturbed Texts Prediction Prediction Positive Negative This is a great (big) show despite many negative user reviews. The aim of this show is to entertain you by making you laugh. Two guys compete against each other to get a girl’s phone number. Simple. The fun in this show is watching the two males try to accomplish their goal. 
Some appear to hate the show for various reasons, but I think, they misunderstood this as an ”educational” show on how to pick up chicks. Well it is not, it is a comedy show, and the whole point of it is to make you laugh, not teach you anything. If you didn’t like the show, because it doesn’t teach you anything, don’t watch it. If you don’t like the whole clubbing thing, don’t watch it. If you don’t like socializing don’t watch it. This show is a comical show. If you down by watching others pick up girls, well its not making you laugh, so don’t watch it. If you are so disappointed in yourself after watching this show and realizing that you don’t have the ability to ”pick-up” girls, there is no reason to hate the show, simply don’t watch it!” Confidence Confidence = 59.56% = 87.76% Positive Negative I have just watched the season 2 finale of Doctor Who, and apart from a couple of dull episodes this show is fantastic (tremendous). Its a sad loss that we say goodbye to a main character once again in the season final but the show moves on. The BBC does need to increase the budget on the show, there are only so many things that can happen in London and the surrounding areas. Also some of the special effects all though on the main very good, on the odd occasion do need to be a little more polished. It was a huge gamble for the BBC to bring back a show that lost its way a long time ago and they must be congratulated for doing so. Roll on to the Christmas 2006 special, the 2005 Christmas special was by far the best thing on television.” Confidence Confidence = 65.10% = 60.03% Negative Positive The One and the Only! Agreed this movie is well shot, but it just makes no sense and no use as to how they made 2 hours seem like 3 just over a small love story, this could have been an episode of the bold and the beautiful or the o.c, in short please don’t watch this movie because there is a song every 5 minutes just to wake you up from you’re sleep, i gave this movie 1/10 (7)! cause that was the lowest,and no this is not based completely on a true story, more than half of it is made up. I repeat the direction of photography is 7 or 8 out of 10, but the movie is just a little too much, the actor’s nasal voice just makes me want to go blow my nose. Unless you are a real him mesh fan this movie is a huge no-no. Confidence Confidence = 81.73% = 89.77% Negative Positive In all, it took me three (7) attempts to get through this movie. Although not total trash, I’ve found a number of things to be more useful to dedicate my time to, such as taking off my fingernails with sandpaper. The actors involved have to feel about the same as people who star in herpes medication commercials do; people won’t really pay to see either, the notoriety you earn won’t be the best for you personally, but at least the commercials get air time.The first one was bad, but this gave the word bad a whole new definition, but it does have one good feature: if your kids bug you about letting them watch R-rated movies before you want them to, tie them down and pop this little gem in. Watch the whining stop and the tears begin. ;) Confidence Confidence = 69.54% = 79.15% Negative Positive This is a very strange (unusual) film, with a no-name cast and virtually nothing known about it on the web. 
It uses an approach familiar to those who have watched the likes of Creepshow in that it introduces a trilogy of so-called ”horror” shorts and blends them together into a connecting narrative of the people who are involved in the segments getting off a bus. There is a narrator who prattles on about relationships, but his talking adds absolutely nothing to the mix at all and just adds to the confusion. As for the stories themselves, well.. I swear I have not got a clue why this movie got an 18 (7) certificate in the UK, which would bring it into line with the likes of Nightmare On Elm Street and The Exorcist. Nothing here is even remotely scary.. there is no gore, sex, nudity or even a swear word to liven things up, this is the kind of thing you could put out on Children’s TV and no-one would bat an eyelid. I can only think if it had got the rating it truly deserved (a PG) no serious horror fan would be seen dead with it, so the distributor probably buffeted the BBFC until they relented. Anyway, here are the 3 (7) tales in summary: 1. A man becomes dangerously obsessed with his telekinetic car to the point of alienating his fiancee. 2. A man who lives in a filthy apartment is understandably freaked out when a living organism evolved from his six-month old tuna casserole. 3. A woman thinks she has found the perfect man through a computer dating service.. that is until he starts to act weird.. And there you have it. Some of them are pretty amusing due to their outlandish premises (my favourite being number 2) but you get the feeling they were meant to be a) frightening and b) morality plays, unfortunately they fail miserably on both counts. To sum up then, this flick is an obscure curiosity.. for very good reasons.” Confidence Confidence = 83.24% = 52.19% Table 8: More adversarial examples instances in IMDB with word-based CNN model. The last three instances in this table show the role of named entities(NEs) in PWWS. The true label of the last three examples are all negative, and we use most frequently occurring cardinal number 7 in the dictionary of positive class as an NEadv. The adversarial examples can be generated by replacing few cardinal number in the original input text with 7. 1097 Original Adversarial Perturbed Texts Prediction Prediction Sci/Tec Business surviving biotech (biotechnology)’s downturns. charly travers offers advice on withstanding the volatility (excitability) of the biotech sector. Confidence Confidence = 45.46% = 43.19% Sci/Tech World e-mail scam targets police chief (headman). wiltshire police warns about ”phishing” after its fraud squad chief was targeted. Confidence Confidence = 36.85% = 43.21% World Sports post-olympic greece tightens purse, sells family silver to fill budget holes (afp). afp - squeezed by a swelling public deficit (shortage) and debt following last month’s costly athens olympics, the greek government said it would cut defence spending and boost revenue by 1.5 billion euros (1.84 billion dollars) in privatisation receipts. Confidence Confidence = 45.73% = 38.48% Sci/Tech Sports prediction unit helps forecast (calculate) wildfires (ap). ap - it’s barely dawn when mike fitzpatrick starts his shift with a blur of colorful maps, figures and endless charts, but already he knows what the day will bring. lightning will strike in places he expects. winds will pick up, moist places will dry and flames will roar. Confidence Confidence = 36.08% = 29.73% Table 9: Adversarial example instances in the AG’s News dataset with char-based CNN model. 
Original Adversarial Perturbed Texts Prediction Prediction Business Games hess truck values at a garage sale im selling some extra hess trucks at a garage sale i have all years in boxes between except for if anyone can give me price recomendations or even a good (unspoilt) offer before saturday it would really be apprechiated look on e bay to see what they are fetching there my guess would be that the issue could go for about us and the most recent could be about (well) more than what you paid Filling station Ford Motor Company Truck Supply and demand Pickup truck Illegal drug trade Best Buy Supermarket Value added tax (taxation) Microeconomics DVD Labor theory of value Postage stamps and postal history of the United States Price discrimination Auction Investment bank Costco Law of value $ale of the Century MMORPG Tax CPU (mainframe) cache Mutual fund Islamic banking Ford Thunderbird Ford F-Series Sales promotion Napoleon Dynamite Internet fraud The Market for Lemons Argos (retailer) Berkshire Hathaway Gasoline (Petrol) Bond Car and Driver Ten Best First-sale doctrine Short selling UK Singles Chart Exchange value Altair 8800 Contract Card Sharks Life insurance Endgame Deal or No Deal Topps Ashton-Tate Hybrid vehicle Externality Google Boeing 747 Wheel of Fortune US and Canadian license plates Home Box Office Day trading Chevrolet El Camino Branch predictor Temasek Holdings Toyota Camry The Standard (Monetary) Privatization Protectionism Car (Railroad) boot (rush) sale Land Rover (Series/Defender (Shielder)) Long Beach, California Labor-power Capital accumulation BC Rail ITunes Music Store Moonshine Dead Kennedys Prices of production Massachusetts Bay Transportation Authority National Lottery E85 MG Rover Group Ford Falcon Fair market value Wayne Corporation Garage rock Donald Trump Paris Hilton DAF Trucks Economics Firefighter Commodity Mortgage My Little Pony (Jigger) Electronic Arts (Graphics) Sport utility vehicle Computer and video (television) games Mitsubishi Motors Corporation American Broadcasting Company Videocassette recorder Electronic commerce Dodge Charger Alcohol fuel Hudson’s Bay Company Biodiesel. and and Finance Recreation Confidence Confidence = 10.04% = 10.01% Table 10: Adversarial example instances in the Yahoo! Answers dataset with LSTM model.
2019
103
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1098–1108 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1098 Heuristic Authorship Obfuscation Janek Bevendorff ∗ Martin Potthast† Matthias Hagen‡ Benno Stein∗ ∗Bauhaus-Universität Weimar †Leipzig University ‡Martin-Luther-Universität Halle-Wittenberg <first>.<last>@uni-{weimar, leipzig}.de <first>.<last>@informatik.uni-halle.de Abstract Authorship verification is the task of determining whether or not two texts were written by the same author. This paper deals with the adversary task, called authorship obfuscation: Preventing verification by altering a tobe-obfuscated text. We introduce an approach that (1) models writing style difference as the Jensen-Shannon distance between the character n-gram distributions of texts, and (2) manipulates an author’s subconsciously encoded writing style in a sophisticated manner using heuristic search. To obfuscate, we explore the huge space of textual variants in order to find a paraphrased version of the to-be-obfuscated text that has a sufficient Jensen-Shannon distance at minimal costs in terms of text quality loss. We analyze, quantify, and illustrate the rationale of this approach, define paraphrasing operators, derive obfuscation thresholds, and develop an effective obfuscation framework. Our authorship obfuscation approach defeats state-of-the-art verification approaches, including unmasking and compression models, while keeping text changes at a minimum. 1 Introduction Can the authorial style of a text be consistently manipulated? More than a century worth of research on stylometry and authorship analysis could not produce a reliable approach to do so manually. In the context of computational authorship obfuscation, a handful of approaches have achieved some limited success but are still rather insufficient. Rulebased approaches are neither flexible, nor is stylometry understood well enough to compile rule sets that specifically target author style. Monolingual machine translation-based approaches suffer from a lack of training data, whereas applying multilingual translation in a cyclic manner as a workaround has proved to be ineffective. In addition, none of the existing approaches offers a means to control the result quality. Given recent advances in controlled text generation, it stands to reason that a lot more can be achieved. In this paper, we depart from the mentioned obfuscation paradigms and, for the first time, cast author obfuscation as a heuristic search problem. Given a to-be-obfuscated text, we search for a costminimum sequence of tailored paraphrasing operations that achieve a significant increase of the text’s style distance to other texts from the same author under a generic writing style representation; costs accrue through operations in terms of their estimated text quality reduction. By designing a hybrid search strategy that neglects admissibility only in the pooling phase, we obtain a significant reduction of the exponentially growing search space that is induced by the paraphrasing operators, enabling the use of informed search algorithms. 
Moreover, we developed a sophisticated framework to deal with the conflicting objectives that naturally arise with such kind of complex text synthesis tasks: a compact representation of the search space of paraphrased text variants, and an effective and efficient, non-monotonic exploration of this search space.1 Our key contributions are a greedy obfuscation approach that maximizes obfuscation gain per operation (Section 3); based on that, an obfuscation heuristic that reconciles obfuscation gain with text quality loss (Section 4); as well as an extensive comparative evaluation (Section 5). Relevant code and research data is released publicly on GitHub.2 2 Related Work Authorship analysis dates back over 120 years (Bourne, 1897) and has mostly dealt with authorship attribution (given a text of unknown authorship and texts from known candidate authors, attribute 1Up to 10,000 text variants per second on a standard PC. 2Code and data: https://github.com/webis-de/acl-19 1099 the unknown text to its true author among the candidates). More recently, the task of authorship verification attracted a lot of interest (given a text of unknown authorship and a set of texts from one known author, verify whether the unknown text is written by that author) since it lies at the heart of many authorship-related problems. Systematic reviews on authorship analysis have been contributed by Juola (2006) and Stamatatos (2009) and the effectiveness of character 3-grams today is “folklore knowledge,” albeit not systematically proven. Still, a complete list of stylometric features has not been compiled to date. Abbasi and Chen (2008) proposed writeprints, a set of over twenty lexical, syntactic, and structural text feature types, which has gained some notoriety within attribution, verification, but also for “anonymizing” texts (Zheng et al., 2006; Narayanan et al., 2012; Iqbal et al., 2008; McDonald et al., 2012). Instead of relying on a rich feature set, Zhao et al. (2006) only extract POS tag distributions and interpret style differences as measurable by the Kullback-Leibler divergence. Teahan and Harper (2003) and Khmelev and Teahan (2003) use compression as an indirect means to measure stylistic difference; later adapted and improved by Halvani et al. (2017). Koppel and Schler (2004) developed the unmasking approach based on the 250 most frequent function words, which are iteratively removed, effectively reducing the differentiability between the texts. The idea behind this approach is that texts written by the same author only differ in few superficial features. By removing those superficial features, differentiability between texts by the same author is expected to degrade faster than for texts written by different authors. Among the first to tackle authorship obfuscation were Rao and Rohatgi (2000), who used cyclic machine translation. Later Brennan et al. (2012) found that machine translation is ineffective and due to its blackbox character also rather uncontrollable. Instead, Xu et al. (2012) proposed within-language machine translation to translate directly between styles. The practicality of this approach, however, is diminished by the general lack of large-scale parallel training data. Another obfuscation approach by Kacmarcik and Gamon (2006) directly targets Koppel and Schler’s unmasking. By iteratively removing the most discriminatory text features, the classification performance of an unmasking verifier is degraded—at the cost of rather unreadable texts. 
From 2016 to 2018, a shared task on authorship obfuscation was organized at PAN (Potthast et al., 2018). Some of the seven participating teams suggested rather conservative rule-based approaches that do not change a text sufficiently to obfuscate authorship against most state-of-the-art verifiers but other obfuscators “fooled” several verifiers, yet again, generating rather unreadable texts. To score high in terms of text quality and obfuscation performance, the shared task organizers asked for approaches that more carefully paraphrase a text (i.e., the meaning should stay the same and the text should still be readable). Our new authorship obfuscation approach is inspired by Stein et al. (2014)’s heuristic paraphrasing idea for “encoding” an acrostic in a given text and by Kacmarcik and Gamon’s observation that changing rather few text passages may successfully obfuscate authorship. 3 Greedy Obfuscation We approach obfuscation from a verification perspective: Given texts from the same author, one of which is not publicly known to be written by that author, the goal is to paraphrase that text so that verification attempts against the other texts fail. In this setting, the key element of our heuristic obfuscation approach is a basic, yet powerful distributional representation of writing style: the Jensen-Shannon distance of the character trigram frequency distribution of the to-be-obfuscated text compared to the others. This model serves three purposes at once: (1) as a stopping criterion, (2) as a primary selection criterion for parts of the text that will yield the highest obfuscation gains if changed, and, (3) as part of our heuristic enabling informed search, which reconciles obfuscation gain with potential text quality loss. In what follows, we formally motivate these dimensions. 3.1 Measuring Stylistic Distance In order to know when to stop obfuscating a text we require a style distance measure. Once a text has been changed sufficiently and its style distance to other texts from the same author exceeds a given threshold, the obfuscator terminates.3 By utilizing character trigram frequencies to represent texts, we employ one of the most versatile 3Another possibility is to stop once the decision of existing verifiers switches to different-authors. However, this would introduce many more hyperparameters and biases regarding the verifiers, let alone the prohibitive runtime overhead. 1100 yet simple features available for authorship analysis, encoding many aspects of authorial style at the same time including vocabulary, morphology, and punctuation. Based on this representation, we consider the well-known Kullback-Leibler divergence (KLD) as a style distance measure: KLD(P∥Q) = X i P[i] log P[i] Q[i] , (1) where P and Q are discrete probability distributions corresponding to the relative frequencies of character trigrams in the to-be-obfuscated text and the known texts respectively. For true probability distributions, the KLD is always non-negative. The KLD has shortcomings. First, it is asymmetric, so it is not entirely clear which character distribution should be P and which should be Q when comparing texts. Second, the KLD is defined only for distributions P and Q where Q[i] = 0 implies P[i] = 0. Conversely, P[i] = 0 yields a zero summand. Since we want to avoid reducing or skewing the measure further by “subsetting” or smoothing the trigrams, we resort to the Jensen-Shannon distance JS∆(Endres and Schindelin, 2003) in lieu of the KLD. 
The JS∆ is a metric based on the symmetric Jensen-Shannon divergence (JSD) that is defined as JSD(P∥Q) = \frac{KLD(P∥M) + KLD(Q∥M)}{2}, (2) with M = \frac{P + Q}{2}. (3) Introducing the artificial distribution M circumvents the KLD’s problem of samples of one distribution being unknown in the other. Since M[i] can never be 0 for any i with P[i] + Q[i] > 0, all summands of either KLD(P∥M) or KLD(Q∥M) must also be non-zero. Using the base-2 logarithm in the KLD, the JSD is [0, 1]-bounded. The JS∆ metric is derived as JS∆(P, Q) = \sqrt{2 \cdot JSD(P∥Q)}. (4) 3.2 Adaptive Obfuscation Thresholds During pilot experiments on our training data, we observed that a fixed JS∆ threshold as the obfuscation target is a bad idea: it leads to over- or under-obfuscation for text pairs that have an a-priori high or low style distance. It also turned out that JS∆ is inversely correlated with text length: pairs of long texts are less distant to each other than pairs of short texts, since the shorter a text, the sparser and noisier is its trigram distribution. This even holds if the texts are written by the same author.
Figure 1: JS∆ in our training data over text length. Each line corresponds to a text pair. The straight lines indicate the 0th and the 50th percentiles of distances within the true different-authors cases.
Figure 1 plots the JS∆ over the text length in our training data, revealing an approximately logarithmic relationship. The most interesting observation is the almost length-invariant spread of the resulting curves. Moreover, depending on their class, the curves tend to converge towards the upper / lower bounds of this spread with growing length, thus being visibly separated. Assuming that the observed JS∆-to-length relationship generalizes to other text pairs of similar length (a hypothesis which merits further investigation in future work), we measure style distance in JS∆@L (Jensen-Shannon distance at length) and fit threshold lines to define obfuscation levels. Table 1 details the obfuscation levels εk corresponding to a linear least-squares fit on the logarithmic scale through a given level’s k-th percentile of the distribution of JS∆ in the different-authors class; the 0th percentile ε0 and the 50th percentile ε0.5 are displayed in Figure 1. The ε0 threshold serves as an obfuscation baseline, indicating a same-author case as unobfuscated if the JS∆ between its documents is below this threshold. Otherwise, we call the obfuscation moderate, strong, stronger, and over-obfuscated, depending on the threshold the JS∆ exceeds.
Threshold   Obfuscation level       Slope    Intercept
< ε0        No Obfuscation          n/a      n/a
≥ ε0        Moderate Obfuscation    −0.099   1.936
≥ ε0.5      Strong Obfuscation      −0.103   2.056
≥ ε0.7      Stronger Obfuscation    −0.104   2.083
> ε0.99     Over-obfuscation        −0.107   2.168
Table 1: Obfuscation levels and their log-scale polynomial fit coefficients on our training corpus.
Regarding the line fit coefficients given in Table 1, the gradients of higher ε thresholds are slightly steeper, providing further evidence of the convergence rate of different-authors cases. The ε0 threshold line will cross the x axis for text lengths of x ≈ 2^19.5 characters. Since negative distances are not sensible, such book-sized texts may be split into smaller chunks, which then can be obfuscated individually.
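Both the distance and the length-adaptive thresholds are compact enough to sketch. The following illustration is not the released implementation; in particular, it assumes the Table 1 fits are taken over the base-2 logarithm of the text length (consistent with the reported x-axis crossing near 2^19.5 characters) and simplifies which length enters JS∆@L.

import math
from collections import Counter
from typing import Dict

def trigram_dist(text: str) -> Dict[str, float]:
    # Relative character-trigram frequencies of a text.
    counts = Counter(text[i:i + 3] for i in range(len(text) - 2))
    total = sum(counts.values()) or 1
    return {t: c / total for t, c in counts.items()}

def kld(p: Dict[str, float], q: Dict[str, float]) -> float:
    # Base-2 KLD (Eq. 1); only called against the mixture M below,
    # so q[t] > 0 wherever p[t] > 0.
    return sum(pi * math.log2(pi / q[t]) for t, pi in p.items() if pi > 0.0)

def js_delta(p: Dict[str, float], q: Dict[str, float]) -> float:
    # Eqs. 2-4: symmetric JSD over the mixture M, then sqrt(2 * JSD).
    m = {t: (p.get(t, 0.0) + q.get(t, 0.0)) / 2.0 for t in p.keys() | q.keys()}
    jsd = (kld(p, m) + kld(q, m)) / 2.0
    return math.sqrt(2.0 * jsd)          # bounded by sqrt(2)

# Table 1 line fits, assumed as threshold(L) = slope * log2(L) + intercept.
EPSILON = {"e0": (-0.099, 1.936), "e0.5": (-0.103, 2.056),
           "e0.7": (-0.104, 2.083), "e0.99": (-0.107, 2.168)}

def threshold(level: str, text_length: int) -> float:
    slope, intercept = EPSILON[level]
    return slope * math.log2(text_length) + intercept

def is_obfuscated(unknown: str, known: str, level: str = "e0.5") -> bool:
    # Simplification: uses the length of the to-be-obfuscated text for JS∆@L.
    dist = js_delta(trigram_dist(unknown), trigram_dist(known))
    return dist >= threshold(level, len(unknown))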
Note that we were able to reproduce these threshold observations on the PAN 2014 novels corpus (Stamatatos et al., 2014), albeit obtaining slightly different coefficients. In practice, we recommend training the coefficients on an appropriate corpus matching genre and register of the to-be-obfuscated texts. 3.3 Ranking Trigrams for Obfuscation Our key idea to yield a strong obfuscation (compared to other texts from the same author) is to iteratively change the frequency of those trigrams of the to-be-obfuscated text for which the positive impact on JS∆is maximum. In each iteration we rank the trigrams by their influence on JS∆via their partial KLD derivative, assuming that probability distribution Q is to be obfuscated: ∂ ∂Q[i]  P[i] log2 P[i] Q[i]  = − P[i] Q[i] ln 2 . (5) Omitting constants, we get the rank-equivalent RKL(i) = P[i] Q[i] . (6) RKL gets larger with smaller Q[i]. I.e., a single obfuscation step boils down to removing one occurrence of the most influential trigram from the to-be-obfuscated text. This can be done naively by simply “cutting it out” (which we tried as a proofof-concept), or, more sensibly, via a targeted paraphrasing operation replacing a text passage with the trigram by another semantically equivalent text passage without the trigram. Then, the trigrams are re-ranked and the procedure is repeated until JS∆ exceeds the desired obfuscation threshold. We call this strategy obfuscation by reduction. Reversing the roles of P and Q yields an addition strategy, which we leave for future work. The above described greedy obfuscation effectively hindered verification in our pilot experiments. However, the naive cut-it-out variant results in rather unreadable texts, and, it may be easily “reverse engineered” by an informed verifier. Even with more sophisticated paraphrasing operations, a reverse-engineering attack against the greedy strategy seems plausible. Thus, we suggest to augment the greedy approach with an informed search, which is introduced in the next section. 4 Heuristic Search for Obfuscation An author of a to-be-obfuscated text does obviously not wish her text to be “foozled” due to obfuscation (e.g., by naively cutting out trigrams). Actually, the text has to convey the same message as before and, ideally, it should look “inconspicuous” to an extent that readers do not suspect tampering (Potthast et al., 2016). However, automatic paraphrasing is still in its infancy: Beyond synonym substitution, paraphrasing operators targeting single words have hardly been devised so far. Still, the paraphrasing operators we are looking for do not have to alter a text substantially, which enables us to better estimate an operator’s negative impact on text quality. Furthermore, similar to the presented greedy obfuscation, we can stop modifying a text when the desired obfuscation threshold is reached, which renders our approach “minimally invasive.” The optimization goals can be summarized as follows: 1. Maximize the obfuscation as per the JS∆beyond a given εk without “over-obfuscating.” 2. Minimize the accumulated text quality loss from consecutive paraphrasing operations. 3. Minimize the number of text operations. Heuristic search is our choice to tackle this optimization problem. We will pay attention to admissibility for two reasons: (1) to understand (in terms of modeling) the nature of the problem, and (2) to be able to compute an optimum solution if time and space constraints permit. 
However, due to the exponential size of the induced state space (text versions as nodes, paraphrasing operators as edges), we may give up admissibility while staying within acceptable error bounds. In the following, we derive an admissible obfuscation heuristic and suggest a small, viable set of basic paraphrasing operators as an initial proof of concept. 4.1 An Admissible Obfuscation Heuristic Let h(n) denote a heuristic estimating the optimal cost for reaching a desired obfuscation threshold from node n, and let g(n) denote the path costs to n starting at the original text node s. Applying a paraphrasing operator has a highly non-linear effect on text quality (some changes are inconspicuous, others are not) and may also restrict the set of applicable operators (in the same text). For instance, applying the same operator a third time in a row may entail higher (quality) costs compared to applying it for the first time. This means that different paths from s to n can come with different estimations for the rest cost h(n); in a nutshell, the parent discarding property may not hold (Pearl, 1984). A similar effect, but rooted in a different cause, results from the observation that some authors’ texts are easier to obfuscate than others. We can address both issues and reestablish the conditions for parent discarding and admissible search by updating the operator costs for future application beyond node n, such that g(n) turns into “normalized path costs.” Based on both the desired obfuscation threshold ε and the JS distance JS∆_n of the text at node n to the other text(s) from the same author, we define the prior heuristic as h_prior(n) = ε − JS∆_n. (7) The normalized path costs g_norm are defined as the cost-to-gain ratio of the accumulated path costs g(n) to the total JS∆ change from start node s: g_norm(n) = \frac{g(n)}{JS∆_n − JS∆_s}. (8) Finally, the heuristic h(n) is defined as the product of h_prior(n) and g_norm(n): h(n) = (ε − JS∆_n) \cdot \frac{g(n)}{JS∆_n − JS∆_s}. (9) The prior heuristic guarantees convergence towards zero as we approach a goal node that exceeds the obfuscation threshold ε, while the normalized path costs determine the slope of the heuristic. Consistency and Admissibility A heuristic h(n) is admissible if it does not exceed h∗(n), the true cost of reaching an optimum goal via state n, for all n in the search space. Monotonicity h(n) ≤ c(n, n′) + h(n′) is a sufficient condition for admissibility, yet easier to show. Rewriting it as −h(n′) + h(n) ≤ g(n′) − g(n) and inserting the heuristic of Equation 9 yields −(ε − JS∆_{n′}) \cdot \frac{g(n′)}{JS∆_{n′} − JS∆_s} + (ε − JS∆_n) \cdot \frac{g(n)}{JS∆_n − JS∆_s} ≤ g(n′) − g(n). Defining ḡ(n) = JS∆_n − JS∆_s as the change function and inserting the previous definitions, we get −h_prior(n′) \cdot \frac{g(n′)}{ḡ(n′)} + h_prior(n) \cdot \frac{g(n)}{ḡ(n)} ≤ g(n′) − g(n). We know h_prior(n) to be monotonically decreasing, inverse to ḡ(n), and converging towards zero as we approach a goal. If the cost and change functions g(n) and ḡ(n) are equivalent up to scale, they cancel each other out (up to scale), the slope of their quotient becomes zero, and the inequality turns into an equality. Otherwise, if g(n) dominates ḡ(n), the inequality still holds. However, if ḡ(n) dominates g(n), the sign of the quotient’s gradient flips (as can be proved by the quotient rule), breaking the inequality and violating consistency. But since JS∆ is bounded by √2 globally, the change function ḡ(n) cannot be superlinear. Limitations of our argument: (1) occasionally ḡ(n) can locally dominate g(n), and (2) both the cost and the change function are presumed differentiable at n.
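To make the node evaluation concrete, the following sketch combines the trigram ranking of Equation 6 with the heuristic of Equation 9. The SearchNode container and the fallback for nodes without any JS∆ gain are assumed simplifications, not the released search framework.

from dataclasses import dataclass
from typing import Dict, List

def rank_trigrams(p: Dict[str, float], q: Dict[str, float]) -> List[str]:
    # Eq. 6: rank the trigrams of the to-be-obfuscated text (distribution Q)
    # by R_KL(i) = P[i] / Q[i]; the highest-ranked ones are the most
    # influential candidates for obfuscation by reduction.
    return sorted(q, key=lambda t: p.get(t, 0.0) / q[t], reverse=True)

@dataclass
class SearchNode:
    js_dist: float    # JS distance of this text variant to the known texts
    path_cost: float  # accumulated operator costs g(n)

def heuristic(node: SearchNode, start_js: float, epsilon: float) -> float:
    # Eq. 9: h(n) = (epsilon - JS_n) * g(n) / (JS_n - JS_s).
    gain = node.js_dist - start_js
    if gain <= 0.0:
        # Assumed fallback at or near the start node, where Eq. 9 is undefined:
        # use only the prior term h_prior(n).
        return epsilon - node.js_dist
    return (epsilon - node.js_dist) * node.path_cost / gain

def f_score(node: SearchNode, start_js: float, epsilon: float) -> float:
    # A*-style node evaluation f(n) = g(n) + h(n).
    return node.path_cost + heuristic(node, start_js, epsilon)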
In practice, the latter may hardly ever be true as texts are noisy, text operation side effects are unpredictable, and, the cumulative change function is not guaranteed to be monotonic. Still, step costs c(n, n′) will never be negative, which makes g(n) monotonic but not necessarily differentiable. Thus, the heuristic function will not be fully consistent and may even overestimate. In a practical scenario we can directly control the cost but not the change function, so we will have to deal with problems of overestimation and local optima. Generally, the first few steps of a search path are the most problematic since with little prior information the heuristic has to extrapolate based on very few data points, but is still expected to accurately estimate the remaining costs. Hence, an early heuristic is particularly susceptible to noise and can only give a coarse estimate. With more cumulative cost and change information available, the heuristic will stabilize towards the mean cost-gain proportion and eventually converge. This stabilization occurs quickly. In real application scenarios, we keep overestimation at a minimum or even avoid it at all and therefore obtain an approximately admissible heuristic due to the JS∆’s boundedness. 1103 4.2 Search Space Challenges Given a longer text (one page or more), the number of potential operator applications is high. The most direct way to expand a node is to generate a successor with each applicable operator for each occurrence of each selected n-gram, but this will inevitably result in an immense number of very similar states with identical costs and almost identical JS∆change. I.e., the main challenge is to find a sensible middle ground between accepting a non-optimal solution too quickly or not finding a solution at all. Recall that one can easily turn the A* search into a depth-first or breadth-first search by making successor generation too cheap or too costly: depth-first search will always find a (nonoptimal) solution after a sufficient number of operations, while breadth-first will never terminate before running out of memory. We can accept a near-optimal solution, so selecting one or two occurrences of an n-gram (instead of all) will be sufficient. A potential problem is that the applicability of a high-quality operator is often restricted. However, one can increase the application probability by selecting not only the top-ranked n-gram but a small number of different near-top n-grams. This way, we have multiple highimpact n-grams with different contexts to work with, and we increase the chances of applying the operator opening alternative paths for the search. In practice, JS∆change is not a monotonic function and steepest-ascent hill climbing does not guarantee an overall lowest-cost path. Thus, we applied each operator to two occurrences of the top ten n-grams and selected from these (up to 140 successors) six randomly for expansion. However, even with only six successors we still generate millions of nodes very quickly and will eventually run out of memory without finding a solution. Fortunately, we can assume that exploring more neighbors will not produce much better results after a while, so we can restart the search from a few promising nodes and still discard other open nodes. 4.3 Paraphrasing Operators Our prototype employs the seven basic text operators shown in Table 2. These are to be understood as a pilot study, more state-of-the-art text generation operators can be added easily. 
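Before turning to the individual operators, the successor pooling of Section 4.2 can be summarized in a short sketch; the operator interface (a callable that returns a modified text, or None when it is not applicable) is an assumed simplification.

import random
from typing import Callable, List, Optional, Sequence

def expand(
    text: str,
    ranked_ngrams: Sequence[str],
    operators: Sequence[Callable[[str, str, int], Optional[str]]],
    top_k: int = 10,
    occurrences: int = 2,
    keep: int = 6,
    rng: Optional[random.Random] = None,
) -> List[str]:
    # Apply each operator to (up to) two occurrences of the ten top-ranked
    # n-grams (up to 140 candidate variants) and keep a small random sample
    # of successors for expansion.
    rng = rng or random.Random(0)
    pool: List[str] = []
    for ngram in ranked_ngrams[:top_k]:
        for occ in range(occurrences):
            for op in operators:
                variant = op(text, ngram, occ)   # None if not applicable here
                if variant is not None and variant != text:
                    pool.append(variant)
    rng.shuffle(pool)
    return pool[:keep]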
The most versatile yet most disruptive basic modifications are (1) the removal of an n-gram and (2) flipping two of its (or adjacent) characters. Such operations are only a last resort, and we hence set their costs much higher than those of the other operators.
Operator name                    Cost value
(1) n-gram removal               40
(2) Character flips              30
Context-free synonyms            10
Context-free hypernyms            6
Context-dependent replacement     4
Character maps                    3
Context-dependent deletion        2
Table 2: Implemented text operators and their assigned step costs in our heuristic obfuscation prototype.
As steps towards real paraphrasing, we also perform context-free synonym and hypernym replacement based on WordNet (Miller, 1995) as well as context-dependent replacements and deletions using the word 5-gram model of Netspeak (Stein et al., 2010). Also, a map of similar punctuation characters indicates inconspicuous character swaps. 5 Evaluation To evaluate our approach, we report on: (1) an efficiency comparison of greedy versus heuristic obfuscation, (2) an effectiveness analysis against well-known authorship verification approaches (unmasking, compression-based models, and PAN participants), as well as (3) a review and discussion of an example obfuscated text. Our experiments are based on PAN authorship corpora and our new Webis Authorship Verification Corpus 2019 of 262 authorship verification cases (Bevendorff et al., 2019), half of them same-author cases, the other half different-authors cases (each a pair of texts of about 23,000 characters / 4,000 words). Instead of the more particular genres studied at PAN, our new corpus contains longer texts and more modern literature from Project Gutenberg. We also took extra care to cleanse the plain text, unified special characters, and removed artifacts; in particular, we ensured that no author appears in more than one case. The training-test split is 70-30 so as to have a decent training portion. The corpus is released alongside the code of our search framework and other research data. 5.1 Search Over Greedy Obfuscation Table 3 contrasts the efficiency of the greedy obfuscation with that of our heuristic search approach, measured in terms of medians of total text operations and path costs.
Efficiency         Subset     Cases #   Median Greedy   Median A*   Gain
Total operations   all        41        148             145         −2 %
                   1+ ops     28        241             202         −16 %
                   100+ ops   21        291             236         −19 %
Path costs         all        41        5,960           1,968       −67 %
                   1+ ops     28        9,680           2,712       −72 %
                   100+ ops   21        11,680          2,935       −75 %
Table 3: Efficiency of greedy obfuscation vs heuristic obfuscation for an obfuscation threshold of ε0.5.
Heuristic search achieves a decrease of operations of up to 19% for texts that need at least 100 operations, and an accumulated path cost decrease of up to 75%. Since the greedy obfuscation approach cannot choose among different operators, it must rely on the most effective one to achieve the obfuscation goal, incurring significant path costs.
Hyperplane confidence   Unobfuscated                           Obfuscated
threshold               Classified cases [%]   Prec.   Rec.    Classified cases [%]   Prec.   Rec.
0.8                      11.3                  1.00    0.17      2.5                  1.00    0.02
0.7                      15.0                  1.00    0.24      6.2                  1.00    0.05
0.6                      18.8                  1.00    0.24     11.3                  0.75    0.07
0.5                      26.3                  1.00    0.29     24.0                  0.86    0.15
0.0                     100.0                  0.74    0.63    100.0                  0.71    0.42
Table 4: Unmasking performance on our test data at various confidence thresholds before and after obfuscation. Recall treats unclassified cases as false negatives.
Given that both obfuscators employ adaptive thresholds, there are cases which do not require any (or only little) obfuscation, whereas others need more than 100. The latter are of particular interest, since it is here where heuristic obfuscation outperforms greedy obfuscation the most. 5.2 Obfuscation against Unmasking One of today’s most effective and robust verification approaches is unmasking by Koppel and Schler (2004). It decomposes to-be-compared texts into two chunk sets, and iteratively trains a linear classifier to discriminate between them while removing the most significant features in each iteration to measure the increased reconstruction error. This error increases faster for same-author cases since those share more function words than do differentauthors cases. Fooling unmasking verification provides us with evidence that our obfuscation technique works at a deeper level than just the few most superficial text features. Unmasking further produces curve plots of the declining classification accuracy, which render the effects of obfuscation accessible to human inspection and interpretation. Following Koppel and Schler, we use the chunk frequencies of the 250 most common words as features, determine classification accuracy by 10-fold cross validation using an SVM classifier, and remove ten features per iteration. The final curves and their gradients are used to train another SVM to separate curves originating from same-author cases from different-authors curves. Following the example of the PAN competitions where the incentive was to classify only high-confidence cases, we consider decisions for cases which can be classified with pre-determined confidence thresholds (i.e., the distance to the hyperplane), which allows to maximize precision at the cost of recall. Table 4 contrasts the performance of unmasking before and after obfuscation on the test data. With increasing confidence thresholds, between 19 % down to 11 % of the cases are decidable before obfuscation, decreased by a factor of 2 to 4 after obfuscation. On average, 205 trigrams were obfuscated; as little as about 3 % of a text. 5.3 Obfuscation against Compression Models Another verification approach that differs from traditional feature-engineering are compression-based models. We use the approach by Halvani et al. (2017), who recommend the compression-based cosine (CBC) by Sculley and Brodley (2006) calculated on the text pairs after compression with the PPMD algorithm (Howard, 1993). Figure 2 shows CBC values on a random selection of 20 exemplary same-author cases from our test dataset before and after obfuscation with the decision threshold highlighted. Quite impressively, almost none of the cases are classified correctly anymore after obfuscation. Overall, the accuracy drops from originally 71 % to 55 %, which is equivalent to random guessing. This strong effect can be explained as follows: Sculley and Brodley describe their metrics in terms of the Kolmogorov complexity, but the reason why natural language allows for very good compression ratios is its predictability (printed English has an entropy of at most 1.75 bits per character (Brown et al., 1992)). PPMD uses finite-order Markov language models for compression, which are effective at predicting characters in a sentence, but sensitive to increased entropy, which is the result of our obfuscation. 
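For reference, a compression-based verification score can be approximated in a few lines. The sketch below substitutes bz2 for PPMD and uses the commonly cited form CBC(x, y) = 1 − (C(x) + C(y) − C(xy)) / \sqrt{C(x) \cdot C(y)}, where C(·) is the compressed size; see Sculley and Brodley (2006) and Halvani et al. (2017) for the exact formulation used in these experiments.

import bz2

def c(data: bytes) -> int:
    # Compressed size; bz2 is only a stand-in for the PPMD compressor.
    return len(bz2.compress(data, compresslevel=9))

def cbc(x: str, y: str) -> float:
    bx, by = x.encode("utf-8"), y.encode("utf-8")
    cx, cy, cxy = c(bx), c(by), c(bx + by)
    return 1.0 - (cx + cy - cxy) / (cx * cy) ** 0.5

Under this form, a pair that compresses well together yields a lower value; the entropy added by obfuscation therefore pushes same-author pairs towards the different-authors side of the decision threshold, matching the shift visible in Figure 2.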
Figure 2: CBC values of 20 PPMD-compressed same-author pairs before and after obfuscation up to the obfuscation threshold ε0.7. The classification threshold by which same-author and different-authors cases could be distinguished is highlighted in the top portion.
5.4 PAN Obfuscation Evaluation We further conducted an extensive evaluation of our obfuscation scheme against the top submissions to the verification task at PAN 2013–2015 (Juola and Stamatatos, 2013; Stamatatos et al., 2014, 2015). The results are shown in Table 5. On all verifiers tested, we achieve an average AUC and C@1 reduction of around 10 and 6 percentage points on three of the corpora. With only minimal text modifications, this puts us in second place on the PAN13 and PAN15 corpora, and fourth on PAN14 Essays compared to other obfuscators submitted to PAN (Hagen et al., 2017). The PAN14 Novels corpus turns out to be the most challenging for our approach and there are multiple reasons for that. First, the texts are significantly longer. This makes it difficult to assess the overall obfuscation with a global measure like JS∆. As a result, only few sentences were actually obfuscated with most of the text left untouched. Insofar, we were surprised to see any significant effect at all (best individual result: 13 percentage points). To make matters worse, the flat search landscape spanned by our obfuscation operators leads to an increasing number of reopened states on these longer texts, greatly reducing the efficiency of the heuristic search. This reveals an important detail to explore in future work: obfuscation operations need to be distributed across the whole text and progress needs to be measured on smaller parts of it to ensure uniform obfuscation of everything and avoid obfuscation “hot spots”. Secondly, the number of “known” texts varies substantially, which demands more research into how we can calculate a minimal yet sufficient JS∆@L stopping criterion if a larger amount of known material is available. Thirdly, the corpus consists primarily of works by H. P. Lovecraft paired with fan fiction, which incurs unforeseeable global corpus features that verifiers can exploit, but which we do not consider for obfuscation. Lastly, we identify kocher15 as the most difficult verifier for us to obfuscate. Employing an impostor approach on the most frequent words, it was not the best-performing verifier in the first place, but proves most resilient against our “reductive” obfuscation, which tends to obfuscate only n-grams that are already rare for maximum effect. We expect that augmenting a reduction obfuscation with the previously-mentioned extension strategy will yield better results and an overall safer obfuscation. 5.5 Example of an Obfuscated Text Assessing the text quality in tasks that involve generation, such as translation, paraphrasing, and summarization, is still mostly manual work. Frequently used measures like ROUGE cannot be applied in the context of obfuscation, since our obfuscated texts are up to 97 % identical to their unobfuscated versions. This is why we resort to manually inspecting obfuscated texts and the changes made. Below is an excerpt of an original text along with the obfuscations applied to it.
Selected trigrams are underlined, removed words are struck out, and inserted words are highlighted: ’It was the only chance we hadw ehad to win.’ Duke swallowed the idea slowly. He couldn’t picture a planetsatellite giving up its last protection for aphi desperate effort to end the war on purely offensive drive. Three billion people watching the home fleet take off, knowingdeciding the skies were openresort for all the hellmischief that a savage enemy could send! On Earth, the World Senate hadn’t permitted the building of one battleshipfrigate, for fear of reprisal. [...] Excerpt of Victory by Lester del Rey We selected an example where, by chance, different operators were applied in close vicinity. This “density” of operations is not representative. We can see both high- and low-quality replacements at work. Most can be attributed to the WordNet synonym operator. The replacement of “a” with “phi” is clearly such a case. The more suitable replacements originate from more context-dependent replacements, whereas “we had” →“w ehad” is a result of the flip operator. For comparison with related work, we carried out a human assessment of a few random obfusca1106 Verifier Unobfuscated Obfuscated Difference AUC C@1 FS AUC C@1 FS AUC C@1 FS a) PAN13 bagnall15 0.86 0.79 0.68 0.74 0.64 0.48 0.11 0.15 0.20 castillojuarez14 0.49 0.43 0.21 0.50 0.53 0.27 -0.02 -0.10 -0.06 castro15 0.93 0.77 0.71 0.87 0.73 0.64 0.06 0.03 0.08 frery14 0.62 0.57 0.35 0.37 0.40 0.15 0.25 0.17 0.20 khonji14 0.86 0.76 0.65 0.70 0.60 0.42 0.16 0.16 0.23 kocher15 0.75 0.64 0.48 0.77 0.65 0.50 -0.02 -0.01 -0.02 layton14 0.62 0.67 0.41 0.47 0.53 0.25 0.15 0.13 0.16 mezaruiz14 0.75 0.65 0.49 0.57 0.53 0.30 0.18 0.12 0.19 mezaruiz15 0.73 0.71 0.52 0.50 0.53 0.26 0.24 0.18 0.26 modaresi14 0.50 0.50 0.25 0.47 0.50 0.24 0.03 0.00 0.02 moreau14 0.77 0.62 0.48 0.61 0.51 0.32 0.16 0.11 0.17 moreau15 0.71 0.47 0.33 0.60 0.47 0.28 0.12 0.00 0.05 singh14 0.39 0.33 0.13 0.44 0.43 0.19 -0.06 -0.10 -0.06 zamani14 0.75 0.70 0.53 0.71 0.70 0.50 0.05 0.00 0.03 Average 0.10 0.06 0.10 b) PAN14 Essays bagnall15 0.57 0.55 0.31 0.43 0.45 0.19 0.14 0.10 0.12 castillojuarez14 0.55 0.58 0.32 0.55 0.58 0.32 0.00 0.00 0.00 castro15 0.62 0.59 0.36 0.51 0.53 0.27 0.11 0.05 0.09 frery14 0.72 0.71 0.51 0.68 0.68 0.46 0.04 0.03 0.05 khonji14 0.60 0.58 0.35 0.41 0.50 0.20 0.19 0.09 0.15 kocher15 0.63 0.59 0.37 0.61 0.57 0.35 0.02 0.02 0.02 layton14 0.59 0.61 0.36 0.51 0.53 0.27 0.08 0.08 0.09 mezaruiz14 0.57 0.56 0.32 0.49 0.51 0.25 0.08 0.04 0.07 mezaruiz15 0.52 0.52 0.27 0.32 0.37 0.12 0.21 0.16 0.16 modaresi14 0.60 0.58 0.35 0.57 0.57 0.32 0.04 0.01 0.03 moreau14 0.62 0.60 0.37 0.51 0.53 0.27 0.11 0.07 0.10 moreau15 0.57 0.52 0.30 0.50 0.51 0.26 0.07 0.01 0.04 singh14 0.70 0.66 0.46 0.61 0.61 0.37 0.09 0.04 0.08 zamani14 0.58 0.55 0.32 0.48 0.49 0.23 0.11 0.06 0.09 Average 0.09 0.05 0.08 Verifier Unobfuscated Obfuscated Difference AUC C@1 FS AUC C@1 FS AUC C@1 FS c) PAN14 Novels bagnall15 0.68 0.68 0.47 0.61 0.59 0.36 0.07 0.09 0.10 castillojuarez14 0.63 0.62 0.39 0.59 0.56 0.33 0.04 0.05 0.06 castro15 0.64 0.51 0.33 0.50 0.39 0.19 0.14 0.12 0.13 frery14 0.61 0.59 0.36 0.59 0.57 0.34 0.02 0.02 0.02 khonji14 0.75 0.61 0.46 0.71 0.58 0.41 0.04 0.03 0.05 kocher15 0.63 0.57 0.36 0.66 0.59 0.39 -0.03 -0.02 -0.03 layton14 0.51 0.51 0.26 0.50 0.50 0.25 0.01 0.01 0.01 mezaruiz14 0.66 0.61 0.41 0.64 0.62 0.40 0.02 0.00 0.01 mezaruiz15 0.56 0.51 0.28 0.57 0.51 0.29 -0.01 0.00 0.00 modaresi14 0.71 0.72 0.51 0.69 0.69 0.47 0.02 0.03 0.03 moreau14 0.60 0.52 0.31 0.56 0.51 
0.29 0.04 0.01 0.03 moreau15 0.64 0.50 0.32 0.61 0.53 0.32 0.03 -0.03 0.00 singh14 0.66 0.58 0.38 0.63 0.56 0.35 0.03 0.02 0.03 zamani14 0.73 0.65 0.48 0.71 0.63 0.44 0.03 0.02 0.03 Average 0.03 0.02 0.03 d) PAN15 bagnall15 0.81 0.76 0.61 0.72 0.71 0.51 0.09 0.05 0.10 castillojuarez14 0.64 0.64 0.41 0.55 0.55 0.30 0.09 0.09 0.11 castro15 0.75 0.69 0.52 0.72 0.68 0.49 0.03 0.01 0.03 frery14 0.54 0.46 0.25 0.47 0.43 0.20 0.07 0.04 0.05 khonji14 0.82 0.65 0.53 0.59 0.49 0.49 0.23 0.16 0.24 kocher15 0.74 0.69 0.51 0.72 0.66 0.48 0.02 0.02 0.03 layton14 0.67 0.50 0.34 0.49 0.50 0.25 0.18 0.00 0.09 mezaruiz14 0.65 0.61 0.40 0.55 0.54 0.30 0.10 0.07 0.10 mezaruiz15 0.74 0.69 0.51 0.55 0.53 0.29 0.19 0.16 0.22 modaresi14 0.40 0.41 0.16 0.39 0.40 0.16 0.01 0.00 0.00 moreau14 0.66 0.58 0.38 0.52 0.49 0.25 0.14 0.09 0.13 moreau15 0.71 0.64 0.45 0.52 0.49 0.26 0.19 0.15 0.20 singh14 0.78 0.50 0.39 0.66 0.50 0.33 0.12 0.00 0.06 zamani14 0.74 0.67 0.50 0.71 0.66 0.47 0.04 0.00 0.03 Average 0.11 0.06 0.10 Table 5: Results of the top verifiers of PAN 2013–2015 before and after obfuscating the four task corpora. FS (Final Score) is the product of AUC and C@1. On average, we degrade AUC by at least 10 and C@1 by about 6 percentage points on three of the corpora, though much less on the PAN14 Novels corpus. Most noticeably, we can reduce the FS of bagnall15 (winning submission of PAN 2015) by 10–20 percentage points on all four corpora. The best obfuscation results on each corpus are marked bold. Verifiers that were improved are highlighted in red. tion samples as per the PAN obfuscation task. We achieved an overall grade of about 2.6 (1 = excellent, 5 = fail), which places us somewhere within the top three submissions. While the obfuscated text probably is not fit for publication, it does look promising even with our basic set of paraphrasing operators. The text was generated within a few minutes and passes the verifiers without being recognized as a same-author case. Texts from other cases look similar: a mixture of poor and good operations, where according to our own review about half of the changes made are still rather nonsensical. Since our set of operators is just a proof of concept, we will devise more sophisticated ones and better weighting schemes in future work, which is vital for achieving acceptable text quality. Promising approaches already exist, such as neural editing and paraphrasing (Grangier and Auli, 2017; Guu et al., 2017). 6 Conclusion We introduced a promising new paradigm for authorship obfuscation and implemented a first fully functional prototype. We identified and addressed the following challenges: measuring style similarity in a manner that is agnostic to state-of-the-art verifiers, identifying those parts of a text that have the highest impact on style, and devising and analyzing a search heuristic amenable for informed search. Our study opens up interesting avenues for future research: obfuscation by addition instead of by reduction, development of more powerful, targeted paraphrasing operators, and, theoretical analysis of the search space properties. We consider heuristic search-based obfuscation a key enabling technology that, combined with tailored deep generative models for paraphrasing, will yield better and stronger obfuscations. 1107 References Ahmed Abbasi and Hsinchun Chen. 2008. Writeprints: A stylometric approach to identity-level identification and similarity detection in cyberspace. ACM Trans. Inf. Syst., 26(2):7:1–7:29. 
Janek Bevendorff, Benno Stein, Matthias Hagen, and Martin Potthast. 2019. Bias Analysis and Mitigation in the Evaluation of Authorship Verification. In Proceedings of ACL 2019, (to appear). Edward Gaylord Bourne. 1897. The authorship of the federalist. The American Historical Review, 2(3):443–460. Michael Brennan, Sadia Afroz, and Rachel Greenstadt. 2012. Adversarial Stylometry: Circumventing Authorship Recognition to Preserve Privacy and Anonymity. ACM Trans. Inf. Syst. Secur., 15(3):12. Peter F. Brown, Stephen Della Pietra, Vincent J. Della Pietra, Jennifer C. Lai, and Robert L. Mercer. 1992. An estimate of an upper bound for the entropy of English. Computational Linguistics, 18(1):31–40. Dominik Maria Endres and Johannes E. Schindelin. 2003. A new metric for probability distributions. IEEE Trans. Information Theory, 49(7):1858–1860. David Grangier and Michael Auli. 2017. Quickedit: Editing text & translations via simple delete actions. arXiv, 1711.04805. Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2017. Generating sentences by editing prototypes. arXiv, 1709.08878. Matthias Hagen, Martin Potthast, and Benno Stein. 2017. Overview of the Author Obfuscation Task at PAN 2017: Safety Evaluation Revisited. In Working Notes Papers of the CLEF 2017 Evaluation Labs. Oren Halvani, Christian Winter, and Lukas Graner. 2017. Authorship verification based on compression-models. arXiv, 1706.00516. Paul G. Howard. 1993. The Design and Analysis of Efficient Lossless Data Compression Systems. Technical Report, CS-93-28, Brown University, 1993. Farkhund Iqbal, Rachid Hadjidj, Benjamin C.M. Fung, and Mourad Debbabi. 2008. A novel approach of mining write-prints for authorship attribution in email forensics. Digital Investigation, 5:S42–S51. Patrick Juola. 2006. Authorship Attribution. Foundations and Trends Information Retrieval, 1(3):233– 334. Patrick Juola and Efstathios Stamatatos. 2013. Overview of the Author Identification Task at PAN 2013. In CLEF 2013 Working Notes Papers. Gary Kacmarcik and Michael Gamon. 2006. Obfuscating Document Stylometry to Preserve Author Anonymity. In Proceedings of ACL 2006. Dmitry V. Khmelev and William John Teahan. 2003. A repetition based measure for verification of text collections and for text categorization. In Proceedings of SIGIR 2003, pages 104–110. Moshe Koppel and Jonathan Schler. 2004. Authorship Verification as a One-Class Classification Problem. In Proceedings of ICML 2004, pages 1–7. Andrew W. E. McDonald, Sadia Afroz, Aylin Caliskan, Ariel Stolerman, and Rachel Greenstadt. 2012. Use Fewer Instances of the Letter "i": Toward Writing Style Anonymization. In Proceedings of PETS 2012, pages 299–318. George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39–41. Arvind Narayanan, Hristo S. Paskov, Neil Zhenqiang Gong, John Bethencourt, Emil Stefanov, Eui Chul Richard Shin, and Dawn Song. 2012. On the feasibility of Internet-scale author identification. In Proceedings of SP 2012, pages 300–314. Judea Pearl. 1984. Heuristics - intelligent search strategies for computer problem solving. Addison-Wesley series in artificial intelligence. Martin Potthast, Matthias Hagen, and Benno Stein. 2016. Author Obfuscation: Attacking the State of the Art in Authorship Verification. In Working Notes Papers of the CLEF 2016 Evaluation Labs. Martin Potthast, Felix Schremmer, Matthias Hagen, and Benno Stein. 2018. Overview of the Author Obfuscation Task at PAN 2018: A New Approach to Measuring Safety. 
In Working Notes Papers of the CLEF 2018 Evaluation Labs. Josyula R. Rao and Pankaj Rohatgi. 2000. Can Pseudonymity Really Guarantee Privacy? In Proceedings of USENIX 2000. D. Sculley and Carla E. Brodley. 2006. Compression and machine learning: A new perspective on feature space vectors. In Proceedings of DCC 2006, pages 332–332. Efstathios Stamatatos. 2009. A Survey of Modern Authorship Attribution Methods. Journal of the American Society for Information Science and Technology, 60(3):538–556. Efstathios Stamatatos, Walter Daelemans, Ben Verhoeven, Patrick Juola, Aurelio López López, Martin Potthast, and Benno Stein. 2015. Overview of the Author Identification Task at PAN 2015. In CLEF 2015 Working Notes Papers. Efstathios Stamatatos, Walter Daelemans, Ben Verhoeven, Martin Potthast, Benno Stein, Patrick Juola, Miguel A. Sanchez-Perez, and Alberto Barrón-Cedeño. 2014. Overview of the Author Identification Task at PAN 2014. In CLEF 2014 Working Notes Papers. 1108 Benno Stein, Matthias Hagen, and Christof Bräutigam. 2014. Generating Acrostics via Paraphrasing and Heuristic Search. In Proceedings of COLING 2014, pages 2018–2029. Benno Stein, Martin Potthast, and Martin Trenkmann. 2010. Retrieving Customary Web Language to Assist Writers. In Proceedings of ECIR 2010, pages 631–635. William J Teahan and David J Harper. 2003. Using compression-based language models for text categorization. In Language modeling for information retrieval, pages 141–165. Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In Proceedings of COLING 2012, pages 2899–2914. Ying Zhao, Justin Zobel, and Phil Vines. 2006. Using Relative Entropy for Authorship Attribution. In Proceedings of AIRS 2006, pages 92–105. Rong Zheng, Jiexun Li, Hsinchun Chen, and Zan Huang. 2006. A framework for authorship identification of online messages: Writing-style features and classification techniques. Journal of the American Society for Information Science and Technology, 57(3):378–393.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1109–1119 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1109 Text Categorization by Learning Predominant Sense of Words as Auxiliary Task Kazuya Shimura1, Jiyi Li2,3 and Fumiyo Fukumoto2 Graduate School of Engineering, University of Yamanashi1 Interdisciplinary Graduate School, University of Yamanashi2 4-3-11, Takeda, Kofu, 400-8511 Japan RIKEN AIP3, Tokyo, 103-0027 Japan {g17tk008,jyli,fukumoto}@yamanashi.ac.jp Abstract Distributions of the senses of words are often highly skewed and give a strong influence of the domain in a document. This paper follows the assumption and presents a method for text categorization by leveraging the predominant sense of words depending on the domain, i.e., domain-specific senses. The key idea is that the features learned from predominant senses are possible to discriminate the domain of the document and thus improve the overall performance of text categorization. We propose a multi-task learning framework based on the neural network model, transformer, which trains a model to simultaneously categorize documents and predicts a predominant sense for each word. The experimental results using four benchmark datasets including RCV1 show that our method is comparable to the state-of-the-art categorization approach, especially our model works well for categorization of multi-label documents. 1 Introduction Text categorization has been intensively studied since neural network methods have attracted much attention. Most of the previous work on text categorization relies on the use of representation learning where the words are mapped to an implicit semantic space (Wang et al., 2015; Liu et al., 2017a). The Word2Vec is a typical model related to this representation (Mikolov et al., 2013). It learns a vector representation for each word and captures semantic information between words. Pre-training by using the model shows that it improves overall performance in many NLP tasks including text categorization. However, the drawback in the implicit representation is that it often does not work well on polysemous words. The sense of a word depends on the domain in which it is used. The same word can be used differently in different domains. Distributions of the senses of words are often highly skewed and a predominant sense of a word depends on the domain of a document (McCarthy et al., 2007; Jin et al., 2009). Suppose the noun word, “court”. The predominant sense of a word “court” would be different in the documents from the “judge/law” and “sports” domains as the sense of the former would be “an assembly (including one or more judges) to conduct judicial business” and the latter is “a specially marked horizontal area within which a game is played” described in the WordNet 3.1. This indicates that the meaning becomes a strong clue to assign a domain to the document. However, in the implicit semantic space created by using the neural language model such as the Word2Vec, a word is represented as one vector even if it has several senses. It is often the case that a word which is polysemous is not polysemous in a restricted subject domain. A restriction of the subject domain makes the problem of polysemy less problematic. However, even in texts from a restricted subject domain such as Wall Street Journal corpus (Douglas and Janet, 1992), one encounters quite a large number of polysemous words. 
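To make the notion of a domain-dependent predominant sense concrete, the following minimal Python snippet (illustrative only, not part of the original paper) lists the WordNet senses of the noun "court" discussed above. Note that NLTK ships WordNet 3.x, whereas the sense indices used later in this paper come from WordNet 2.0, so the inventories and indices differ slightly.

```python
# Illustrative only: enumerate the WordNet senses of the noun "court".
# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

for synset in wn.synsets("court", pos=wn.NOUN):
    print(f"{synset.name():20s} {synset.definition()}")

# In a "judge/law" document the judicial-assembly sense is typically the
# predominant one, while in a "sports" document the playing-area sense dominates.
```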
Several authors have focused on this problem and proposed deep contextualized word representations such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) that model not only syntax but also semantics, including polysemy. Their methods work very well in many NLP tasks such as question answering and sentiment analysis; however, they are unsupervised and do not explicitly map each sense of a word to its domain. Motivated by this problem, we propose a method for text categorization that complements implicit representations by leveraging the predominant sense of a word.

We propose a multi-task learning method based on the encoder structure of the neural network model, the transformer (Vaswani et al., 2017). The transformer relies on a self-attention mechanism: it can directly capture the relationship between two words regardless of their distance, which is effective for detecting features that discriminate the predominant sense of a word in a domain. In the multi-task learning model, the auxiliary predominant sense prediction task helps text categorization by learning a common feature representation of predominant senses for text categorization. The model adopts a multi-task objective function and is trained to simultaneously categorize texts and predict a predominant sense for each word. In this way, the predominant sense information can also help the model to learn better sense/document representations. The experimental results using four benchmark datasets support our conjecture that predominant sense identification helps to improve the overall performance of the text categorization task.

The main contributions of our work can be summarized as follows: (1) We propose a method for text categorization that complements implicit representations by leveraging the predominant sense of a word. (2) We introduce a multi-task learning framework based on the neural network model, the transformer. (3) We verify our hypothesis that predominant sense identification helps to improve the overall performance of the text categorization task; in particular, our model is effective for the categorization of multi-label documents.

2 Text Categorization Framework

Our multi-task learning framework for predominant sense prediction and text categorization is illustrated in Figure 1.

Figure 1: Multi-task learning for predominant sense prediction and text categorization: "make" and "bank" marked in red show the target words. "make%2:40:01::" and "bank%1:14:00::" show the sense indices obtained from WordNet 2.0 and indicate the predominant senses of "make" and "bank" in the economy domain, respectively.

2.1 Text Matrix by the Transformer Encoder

As shown in Figure 1, we use the transformer encoder to represent the text matrix (Vaswani et al., 2017). It is based on self-attention networks: each word is connected to every other word in the same sentence via self-attention, which makes it possible to gather rich information to predict domain-specific senses. The encoder typically stacks six identical layers. Each layer consists of two sub-layers, multi-head attention and a feed-forward network, combined with layer normalization and residual connections. For each word within a sentence, including the word itself, the multi-head attention computes attention weights, i.e., a softmax distribution shown in Eq. (1):

\[
\mathrm{attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V. \quad (1)
\]

The inputs are queries $Q$, keys $K$ of dimension $d_k$, and values $V$ of dimension $d_v$; $\sqrt{d_k}$ is a scaling factor. The inputs are linearly projected $h$ times in order to allow the model to jointly attend to information from different representations, and the results are concatenated:

\[
\mathrm{multiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \cdots, \mathrm{head}_h)W^{O}, \quad \text{where } \mathrm{head}_i = \mathrm{attention}(QW^{Q}_i, KW^{K}_i, VW^{V}_i), \quad (2)
\]

with parameter matrices $W^{Q}_i \in \mathbb{R}^{d_{model} \times d_k}$, $W^{K}_i \in \mathbb{R}^{d_{model} \times d_k}$, $W^{V}_i \in \mathbb{R}^{d_{model} \times d_v}$, and $W^{O} \in \mathbb{R}^{h d_v \times d_{model}}$. Here, $d_{model}$ refers to the dimension of a word vector. Let the output of $\mathrm{multiHead}(Q, K, V)$ be $M_{attn}$. On top of the multi-head attention, there is a feed-forward network that consists of two layers with a ReLU activation. Each encoder layer takes the output of the previous layer as input, which allows it to attend to all positions of the previous layer. We obtain the output matrix $M_{trf}$ shown in Figure 1 as the output of the transformer encoder.

2.2 Domain-Specific Sense Prediction

Each target word vector, i.e., the vector of a word which should be assigned a domain, is extracted from the matrix $M_{trf}$ and passed to the fully connected layer $FC_{dss}$. In Figure 1, "make" and "bank" denote the target words. The weight matrix of $FC_{dss}$ is denoted $W_{dss} \in \mathbb{R}^{d_{model} \times d_{dss}}$, where $d_{dss}$ is the number of dimensions of the output, which equals the number of domain-specific senses over all of the target words. The predicted sense vector $y^{(dss)}$ is obtained as:

\[
y^{(dss)} = \mathrm{softmax}(M_{trf} \cdot W_{dss}). \quad (3)
\]

We compute the loss by using $y^{(dss)}$ and its true domain-specific sense vector $t^{(dss)}$, which is represented as a one-hot vector. The loss function is defined by Eq. (4):

\[
L_{dss}(\theta) =
\begin{cases}
-\dfrac{1}{n_{dss}} \sum_{i=1}^{n} \sum_{w=1}^{n_w} \sum_{s=1}^{d_{dss}} t^{(dss)}_{iws} \log\big(y^{(dss)}_{iws}\big) & (n_{dss} \geq 1), \\[4pt]
0 & (n_{dss} = 0).
\end{cases} \quad (4)
\]

$n$ refers to the minibatch size and $n_w$ is the number of words in a document. $n_{dss}$ is the number of target words within the minibatch and $\theta$ refers to the parameters used in the network. $t^{(dss)}_{iws}$ and $y^{(dss)}_{iws}$ denote the true value (1 or 0) and the predicted value of the $s$-th domain-specific sense for the $w$-th target word in the $i$-th document within the minibatch, respectively. As shown in Figure 1, we obtain the text matrix $M_{dss}$ by replacing each target vector ("make" and "bank") in the matrix $M_{trf}$ with its domain-specific sense vector ("make%2:40:01::" and "bank%1:14:00::").

2.3 Text Categorization

We merge all the vectors of the matrix $M_{dss}$ per dimension and obtain one document vector $D_{sum}$. We pass it to the fully connected layer $FC_{tc}$. The number of dimensions of the output vector, $d_{tc}$, obtained by $FC_{tc}$ equals the total number of domains. Let the prediction vector $y^{(tc)}$ be $W_{tc} \times D_{sum}$, where $W_{tc} \in \mathbb{R}^{d_{model} \times d_{tc}}$ indicates the weight matrix of $FC_{tc}$. We apply the softmax function for the single-label categorization task, which is defined by:

\[
\hat{p}^{(tc)}_{ic} = \frac{\exp\big(y^{(tc)}_{ic}\big)}{\sum_{c'=1}^{d_{tc}} \exp\big(y^{(tc)}_{ic'}\big)}. \quad (5)
\]

Similarly, we use the sigmoid function $\sigma(x) = \frac{1}{1+e^{-x}}$ for the multi-label categorization problem. The training objective is to minimize the following loss:

\[
L_{tc}(\theta) =
\begin{cases}
-\dfrac{1}{n} \sum_{i=1}^{n} \sum_{c=1}^{d_{tc}} t^{(tc)}_{ic} \log\big(\hat{p}^{(tc)}_{ic}\big) & \text{(Single-label)}, \\[4pt]
-\dfrac{1}{n} \sum_{i=1}^{n} \sum_{c=1}^{d_{tc}} \Big[ t^{(tc)}_{ic} \log\big(\sigma(y^{(tc)}_{ic})\big) + \big(1 - t^{(tc)}_{ic}\big) \log\big(1 - \sigma(y^{(tc)}_{ic})\big) \Big] & \text{(Multi-label)}.
\end{cases} \quad (6)
\]

Single-label and Multi-label in Eq. (6) denote the loss functions for single-label and multi-label prediction, respectively. $n$ refers to the minibatch size and $\theta$ denotes the parameters used in the network.
t(tc) ic and y(tc) ic show the value of the c-th domain in the i-th document within the minibatch size and its true value (1 or 0), respectively. In case of a single domain, a domain whose probability score is the maximum is regarded to the predicted domain. When the test data is the multi-label problem, we set a threshold value λ and domains whose probability score exceeds the threshold value are considered for selection. 2.4 Multi-task Learning We assume that the auxiliary predominant sense prediction task helps the text categorization task by learning common feature representation of predominant senses for text categorization. The model adopts a multi-task objective function which is shown in Eq. (7). It is trained to simultaneously categorize texts and predicts a predominant sense for each word. L(multi)(θ(sh), θ(dss), θ(tc)) = L(dss)(θ(sh), θ(dss)) +L(tc)(θ(sh), θ(tc)) (7) θ(sh) in Eq. (7) refers to a shared parameter of the two tasks. θ(dss) and θ(tc) stand for a parameter estimated in domain-specific sense prediction and that of text categorization, respectively. Given a corpus, the parameters of the network are trained to minimize the value obtained by Eq. (7). 3 Experiments 3.1 Dataset We performed the experiments on four benchmark datasets having domains to evaluate the properties 1112 SFC RCV1 Arts Arts, Entertainment Science Science Politics Politics Economy Economics Sports Sports Weather Weather Politics Government Industry Corporate Law Law Environment Environment Tourism Travel Military War Commerce Market Table 1: SFC and RCV1 correspondences SFC APW Arts Entertainment Politics Politics Economy Financial Sports Sports Weather Weather Table 2: SFC and APW(AQUAINT) correspondences of our framework: RCV1 (Lewis et al., 2004), 20 Newsgroups1, 1999 APW2 from the AQUAINT corpus3, and AG’s corpus of news articles4. The data for domain-specific sense prediction is based on the senses provided by the allwords task in SensEval-2 (Palmer et al., 2001) and SensEval-3 (Snyder and Palmer, 2004). Magnini et al (Magnini and Cavaglia, 2000; Magnini et al., 2002) created a lexical resource where WordNet 2.0 synsets were annotated with Subject Field Codes (SFC). Especially, 96% of WordNet synsets for nouns are annotated. We assigned each domain described in their SFC list to the sense of the all-words task in SensEval-2 and SensEval-3 data. Moreover, we assigned SFC labels to four benchmark datasets having domains. The SFC consists of 115,424 words assigning 168 domain labels which include some of the four datasets’ domains. We manually corresponded these domains to SFC labels which are shown in Tables 1, 2, 3 5, and 4. The dataset statistics are summarized in Table 5 and examples of domain-specific sense-tagged 1http://people.csail.mit.edu/jrennie/20Newsgroups/ 2We did not use 1998 and 2000 APW as the domains are not assigned to these data. 3http://catalog.ldc.upenn.edu/LDC2002T31 4https://github.com/mhjabreel/ CharCnn Keras/tree/master/data/ag news csv 5{“autos”, “motorcycles”}, and “sport” are assigned to different SFC labels. However, we followed 20News categorization and grouped into one. SFC 20News Arts Rec.autos, Rec.motercycles Rec.sport.baseball, Rec.sport.hockey Science Sci.crypt, Sci.electronics, Sci.med, Sci.space Politics Talk.politics.mis, Talk.politics.guns Talk.politics.mideast Table 3: SFC and 20News correspondences: 20News contains seven top categories. Of these, we used three, each of which corresponds to SFC. 
SFC AG Arts Entertainment Science Science Sports Sports Table 4: SFC and AG correspondences data are shown in Table 6. RCV1 consists of 806,701 documents, one-year corpus from Aug 20th, 1996 to Aug 19th, 1997. RCV1 is a large volume of data compared to the other three data. We thus reserved eight months of the RCV1 data to learn word-embedding model. The model is also used for the other three datasets because they are the same genre as the RCV1, news stories. We divided the remaining data into three. The division is the same as the other three datasets: we reserved 60% of the data to train the models, 20% of the data is used for tuning hyperparameters, and the remaining 20% is used to test the models. All the documents are tagged by using Stanford CoreNLP Toolkit (Manning et al., 2014). 3.2 Baselines We compared our method to three baseline methods: (i) TRF-Single which is a text categorization based on the transformer but without domainspecific sense prediction, (ii) TRF-Sequential, a method first predicts domain-specific senses and then classify documents by using the result, and (iii) TRF-Delay-Multi, which is a model to start learning predominant sense model at first until the stable, and after that it adapts text categorization simultaneously. This is a mixed method of TRFSequential with fully separated training and TRFMulti with fully simultaneously training. We compared our method with these approaches. For multi-label text categorization by using RCV1 data, we chose XML-CNN as a baseline method because their method is simple but powerful and attained at the best or second best compared to the seven existing methods including Bow-CNN (Johnson and Zhang, 2015) on six 1113 Datasets N D L W S ˆS M ˆ M RCV1 502,383 13 2.4 565 992 3,800,197 38,645 3,831 APW 46,032 5 1 397 586 877,400 9,206 1,497 20News 10,228 3 1 404 563 46,410 3,409 82 AG 95,700 3 1 390 562 124,885 31,900 222 Table 5: Data Statistics: N is the number of documents, D shows the number of domains, L is the average number of domains per document, W refers to the number of different target words, S is the number of different target senses, and ˆS denotes the total number of target senses in the documents, M shows the average number of documents per domain, and ˆ M is the average number of documents per target sense. Domain Document Arts jonathan think there be a earlier russian film movie%1:10:00:: on tv just say it be base on a gogol . Science the usaf of this program%1:10:02:: be very open to ssato and will about 50m next year for study%1:09:03:: . Politics i do not think the suffering of some jew during wwius justify the commit by the israeli government%1:14:00:: . Table 6: Sense-tagged training data (20News): Words marked with “%” indicates sense index obtained by the WordNet 2.0. Each word is lemmatized by using CoreNLP-Toolkit. Hyperparameter Value The # of dimensions of a word vector (dmodel) 100 The # of epoch 100 Minibatch sizes (n) 32 Activation function ReLu Threshold value for Multi-label learning (λ) 0.5 Gradient descent Adam Table 7: Model settings: The hyperparameters commonly used in all of the method. benchmark datasets where the label-set sizes are up to 670K (Liu et al., 2017a). Original XMLCNN is implemented by using Theano,6 while we implemented our method by Chainer.7 To avoid the influence of the difference in libraries, we implemented XML-CNN by Chainer and used it as a baseline. We followed the author-provided implementation in our Chainer’s version of XMLCNN. 
To make a fair comparison, we used fastText (Joulin et al., 2017) as a word-embedding tool with all of the methods. 3.3 Model settings and evaluation metrics The hyperparameters which are commonly used in all of the methods and their own estimated hyperparameters are shown in Tables 7 and 8, respec6https://drive.google.com/file/d/1Wwy!MNkrJRXZM3WN ZNywa94c2-iEh 6U/view 7https://chainer.org tively8. These hyperparameters are optimized by using a hyperparameter optimization framework called Optuna9. They were independently determined for each dataset. In the experiments, we run five times for each model and obtained the averaged performance. We used standard recall, precision, and F1 measures. We further computed Macro-averaged F1 and Micro-averaged F1 and used them through the experiments. 3.4 Results The performance of all methods in Microaveraged F1 and Macro-averaged F1 on four datasets are summarized in Tables 9, and 10, respectively. Overall, both Micro and Macroaveraged F1 obtained by each method were very high except for the RCV1 data. Because these datasets consist of at most five domains and a single-label problem. The Micro and Macro-F1 obtained by TRF-Single were better than those obtained by XML-CNN except for APW corpus. This shows that text categorization based on the encoder of the transformer is effective for categorization. Sequential learning does not work well for text categorization. Because the average Macro-F1 obtained by TRF-Sequential (89.41%) was slightly worse than that of TRFSingle (89.74%), while Micro-averaged F1 obtained by TRF-Sequential (90.02%) was slightly better than TRF-Single (89.89%). TRF-Delay-Multi was worse than TRFSequential. Especially, as shown in Tables 9 and 10, the results in RCV1 were worse than TRF-Single. One possible reason for the result is that predominant sense identification is more difficult task compared with text categorization. As shown in Table 5, for example, in RCV1, the average number of documents per target 8Our source code including Chainer’s version of XML-CNN is available at: https://github.com/ShimShim46/TRF Multitask 9https://github.com/pfnet/optuna 1114 Data XML-CNN TRF-Single TRF-Seq, TRF-Delay TRF-Multi fr f wd h e wd h e wd ep h e wd RCV1 2, 3, 4 128 1.00×10−4 10 1 1.00×10−4 10 2 1.00×10−4 75 10 1 1.00×10−4 APW 1, 2, 3 256 1.18×10−10 10 1 8.77×10−4 10 1 4.39×10−4 100 10 1 3.60×10−6 20News 4, 5, 6 128 3.05×10−4 5 1 1.42×10−10 5 1 9.08×10−8 75 10 1 4.39×10−8 AG 3, 4, 5 256 4.15×10−4 10 3 6.50×10−4 10 2 2.00×10−4 25 10 1 1.59×10−6 Table 8: Model settings for each method: “TRF-Seq.” and “TRF-Delay” show TRF-Sequential and TRF-DelayMulti, respectively. “fr” refers to filter region and “f” shows Filters. “wd” indicates Weight Decay. “h” shows multi-attention layers and “e” is a stack of encoders. “ep” refers to the number of epochs in the predominant sense prediction used in the baseline (iii). For instance, 75 indicates that we run predominant sense prediction task until 75 epochs, and then run multi-task learning. Methods Datasets XML-CNN TRF-Single TRF-Sequential TRF-Delay-Multi TRF-Multi RCV1 70.01 70.30 70.43 62.43 71.92 APW 98.96∗ 98.23 98.53 98.80∗ 99.34 20News 88.39 91.51 91.62 91.93∗ 92.87 AG 99.07 99.52∗ 99.52∗ 99.73∗ 99.82 Average 89.10 89.89 90.02 88.22 90.98 Table 9: Micro-averaged F1 (%): Bold font shows the best result with each line. The method marked with “∗” indicates the score is not statistically significant compared to the best one. We used a t-test, p-value < 0.05. 
Methods Datasets XML-CNN TRF-Single TRF-Sequential TRF-Delay-Multi TRF-Multi RCV1 56.59 70.03 68.52 62.43 71.82 APW 98.19 97.13 97.70 98.05 99.14 20News 88.04 92.72 91.94 91.60 92.62∗ AG 96.61 99.08 99.51∗ 99.38∗ 99.64 Average 84.85 89.74 89.41 87.86 90.80 Table 10: Macro-averaged F1 (%): Bold font shows the best result with each line. The method marked with “∗” indicates the score is not statistically significant compared to the best one. We used a t-test, p-value < 0.05. Datasets TRF-Seq. TRF-Multi TRF-Delay RCV1 92.38 97.91 APW 95.51 98.82 20News 84.44 86.64 AG 91.26 92.03∗ Average 90.90 93.85 Table 11: Micro-averaged F1(%) of predominant sense prediction: The method marked with “∗” indicates the score is not statistically significant compared to the best one. We used a t-test, p-value < 0.05. sense is 3,831, while the average number of documents per domain is 38,645. The training data for predominant senses is smaller than that of text categorization, which causes the overfitting problem. As a result, TRF-Delay-Multi does not work well and even worse than TRF-Single. This shows that separately learning predominant sense model at first until the stable, and after that, learning predominant sense prediction and text categorization simultaneously did not improve the overall performance. Datasets TRF-Seq. TRF-Multi TRF-Delay RCV1 78.84 83.32 APW 75.38 79.70 20News 70.13 72.76 AG 77.54 80.73 Average 75.47 78.88 Table 12: Macro-averaged F1(%) of predominant sense prediction Overall, the results obtained by TRF-Multi were the best among them by both Micro and Macroaveraged F1. This indicates that the predominant sense information through multi-task learning can help the model to learn better sense/document representations. On RCV1, the overall performance in each method was worse than those obtained by using other data as the categorization task is more difficult task compared with other data, i.e., multilabel problem. However, TRF-Multi is still better than other methods. The improvement was 1.49% ∼9.49% by Micro-F1 and 1.79% ∼15.23% by Macro-F1. 1115 1 10 20 30 40 50 60 70 80 90 100 Epoch 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Micro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (a) Micro-F1 (RCV1) 1 10 20 30 40 50 60 70 80 90 100 Epoch 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Micro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (b) Micro-F1 (APW) 1 10 20 30 40 50 60 70 80 90 100 Epoch 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Micro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (c) Micro-F1 (20News) 1 10 20 30 40 50 60 70 80 90 100 Epoch 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Micro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (d) Micro-F1 (AG) Figure 2: Micro-F1 against the # of epochs obtained by using the test data: Multi-task learning stability. 
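For reference, the micro- and macro-averaged F1 scores reported in Tables 9-12 can be computed with scikit-learn as in the minimal sketch below; the toy label arrays are invented for illustration and are not the paper's predictions.

```python
# Illustrative sketch: micro- vs. macro-averaged F1 with scikit-learn.
# The toy labels below are made up; they are not the paper's outputs.
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 2, 2, 2, 2]   # gold domain labels
y_pred = [0, 1, 1, 1, 2, 2, 2, 0]   # predicted domain labels

print("Micro-F1:", f1_score(y_true, y_pred, average="micro"))  # pools all decisions
print("Macro-F1:", f1_score(y_true, y_pred, average="macro"))  # unweighted mean over domains
```

Micro-averaging favours frequent domains, while macro-averaging weights every domain equally, which is why the two measures can diverge on skewed datasets such as RCV1.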
1 10 20 30 40 50 60 70 80 90 100 Epoch 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Macro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (a) Macro-F1 (RCV1) 1 10 20 30 40 50 60 70 80 90 100 Epoch 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Macro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (b) Macro-F1 (APW) 1 10 20 30 40 50 60 70 80 90 100 Epoch 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Macro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (c) Macro-F1 (20News) 1 10 20 30 40 50 60 70 80 90 100 Epoch 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Macro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (d) Macro-F1 (AG) Figure 3: Macro-F1 against the # of epochs obtained by using the test data: Multi-task learning stability. Tables 11 and 12 show the Micro and MacroF1 of predominant sense prediction, respectively. The overall performance of multi-task learning was better to those of TRF-Seq. (TRF-Delay) by both measures except for Micro-F1 on AG data. This confirms our conjecture: to train the data in order to simultaneously categorize texts and predict domain-specific senses is effective for sense prediction. Figures 2 and 3 show Micro and Macro-F1 against the number of epochs by using each of the four datasets. As we can see from these Figures, on 20News and AG corpus, each model except for XML-CNN are similar learning stability in both Micro and Macro-F1 curves. On RCV1, we have the same observation by Micro-F1 except for TRF-Delay-Multi and there is no significant difference in stability between TRF-Multi and TRFSequential by Macro-F1. On APW, TRF-Multi is similar to XML-CNN as they are stable after 60 epochs. In summary, TRF-Multi gets more stable through the datasets and in both measures. We also examined the affection on each categorization performance by the ratio of predominantsense tagged training data. For each domain and each predominant-sense, we count the total number of documents and obtained 5% to 80% of the training documents. The results by Micro and Macro-F1 are illustrated in Figures 4, and 5, respectively. The Micro-F1 values except for 20News and for TRF-Delay-Multi on RCV1 are not a significant difference among methods and keep the performance until the ratio of training data decreased at 40%. Similarly, when the ratio is larger than 20%, the Macro-F1 on APW and AG obtained by all the methods do not differ significantly except for XML-CNN. The Micro and Macro-F1 curves obtained by 20news and Macro-F1 curve on RCV1 shows that more training data helps the performance. This is reasonable because the average number of training data per domain on 20news is 3,409 and it is extremely smaller than other datasets. RCV1 is also a multi-label problem. The curves obtained by TRF-Multi drop slowly compared to other methods and it keeps the best performance by both evaluation measures and even in the ratio of 5%. 
From the observations, 1116 5 10 20 40 60 80 Ratio of Training Data [%] 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 Micro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (a) Micro-F1 (RCV1) 5 10 20 40 60 80 Ratio of Training Data [%] 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 Micro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (b) Micro-F1 (APW) 5 10 20 40 60 80 Ratio of Training Data [%] 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 Micro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (c) Micro-F1 (20News) 5 10 20 40 60 80 Ratio of Training Data [%] 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 Micro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (d) Micro-F1 (AG) Figure 4: Micro-F1 against the ratio of training data 5 10 20 40 60 80 Ratio of Training Data [%] 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Macro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (a) Macro-F1 (RCV1) 5 10 20 40 60 80 Ratio of Training Data [%] 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Macro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (b) Macro-F1 (APW) 5 10 20 40 60 80 Ratio of Training Data [%] 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Macro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (c) Macro-F1 (20News) 5 10 20 40 60 80 Ratio of Training Data [%] 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Macro Fscore XML-CNN TRF-Single TRF-Sequential TRF-Multi TRF-Delay-Multi (d) Macro-F1 (AG) Figure 5: Macro-F1 against the ratio of training data we can conclude that TRF-Multi learning model works well, especially in the cases that the number of training data per domain is small. 4 Related Work Deep learning techniques have been great successes for automatically extracting contextsensitive features from a textual corpus. Many authors have attempted to apply deep learning methods including CNN (Kim, 2014; Zhang et al., 2015; Wang et al., 2015; Zhang and Wallace, 2015; Zhang et al., 2017; Wang et al., 2017), the attention based CNN (Yang et al., 2016), bag-of-words based CNN (Johnson and Zhang, 2015), and the combination of CNN and recurrent neural network (RNN) (Zhang et al., 2016) to text categorization. Most of these approaches demonstrated that neural network models are powerful for learning effective features from textual input. However, most of them for learning word vectors only allow a single context-independent representation for each word even if it has several senses. Peters et al. addressed the issue and proposed a model of deep contextualized word representation called ELMo derived from a bidirectional LSTM (Peters et al., 2018). They reported that their representation model significantly improves the state-of-the-art across six NLP problems. Similarly, Devlin proposed a model of deep contextualized word representation called BERT that can deal with syntax and semantics including polysemies (Devlin et al., 2018). Their methods attained amazing results in many NLP tasks. However, they do not explicitly map each sense of a word to its domain as their methods are unsupervised manner. Moreover, their model needs a large amount of corpus which leads to computational workload. Our model utilizes existing domain-specific senses (Magnini and Cavaglia, 2000; Magnini et al., 2002) as pseudo rough but explicit word representation data. It enables us to learn feature representations for both predominant senses and text categorization with a small amount of data. 
Similar to the text categorization task, the recent upsurge of deep learning techniques have also contributed to improving the overall performance on Word Sense Disambiguation (WSD) (Yuan et al., 1117 2016; Raganato et al., 2017; Peters et al., 2018). Melamud et al. proposed a method called Context2Vec which learns each sense annotation in the training data by using a bidirectional LSTM trained on an unlabeled corpus (Melamud et al., 2016). More recently, Vaswani et al. introduced the first full-attentional architecture called Transformer. It utilizes only the self-attention mechanism and demonstrated its effectiveness on neural machine translation. Since then, the transformer has been successfully applied to many NLP tasks including semantic role labeling (Strubell et al., 2018) and sentiment analysis (Ambartsoumian and Popowich, 2018). To the best of our knowledge, this is the first approach for predicting domain-specific senses based on a transformer that is trained with multi-task learning. In the context of predominant sense prediction, several authors have attempted to use domainspecific knowledge to disambiguate senses and show that the knowledge outperforms generic supervised WSD (Agirre and Soroa, 2009; Faralli and Navigli, 2012; Taghipour and Ng, 2015). McCarthy et al. proposed a statistical method for assigning predominant noun senses (McCarthy et al., 2004, 2007). They find words with a similar distribution to the target word from parsed data. They tested 38 words containing two domains of Sports and Finance from the Reuters corpus (Rose et al., 2002). Similarly, Lau et al. (2014) proposed a fully unsupervised topic modeling-based approach to sense frequency estimation. Faralli and Navigli (2012) attempted to performing domain-driven WSD by a pattern-based method with minimally-supervised framework. While conceptually similar, our model differs from these approaches in that it is supervised learning by adopting existing domain-specific sense tags for creating the data. In the context of multi-task learning, many authors have attempted to apply it to NLP tasks (Collobert and Weston, 2008; Glorot et al., 2011; Liu et al., 2015, 2016). Liu et al. proposed adversarial multi-task learning which alleviates the shared and private latent feature spaces from interfering with each other (Liu et al., 2017b). Xiao et al. attempted multi-task CNN which introduces a gate mechanism to reduce the interference (Xiao et al., 2018). They reported that their approach can learn selection rules automatically and gain a great improvement over baselines through the experiments on nine text categorization datasets. Both of them focused on text categorization task only as a multi-task and used the word embeddings which are initialized with Word2Vec or GloVe vectors. Aiming at text categorization with relatively small amounts of training data, we demonstrated a predominant sense of a word is effective for text categorization in the framework of multi-task learning with domainspecific sense identification and text categorization. This enabled us to obtain better explicit feature representations to classify documents. 5 Conclusion We have presented an approach to text categorization by leveraging a predominant sense of a word depending on the domain. We empirically examined that predominant sense identification helps to improve the overall performance of text categorization in the framework on multi-task learning. 
The comparative results with the baselines showed that our model is competitive as the improvement was 1.49% ∼9.49% by Micro-F1 and 1.79% ∼ 15.23% by Macro-F1. Moreover, we found that our model works well, especially for the categorization of documents with multi-label. Future work will include: (i) incorporating lexical semantics such as named entities for further improvement, (ii) comparing our model to other deep contextualized word representation such as ELMO and BERT, and (iii) applying the method to other domains for quantitative evaluation. Acknowledgments We are grateful to the anonymous reviewers for their insightful comments and suggestions. We also thank Dr. Bernardo Magnini who provided us WORDNET-DOMAINS-3.2 data. This work was supported by the Grant-in-aid for JSPS, Grant Number 17K00299, and Support Center for Advanced Telecommunications Technology Research, Foundation. References E. Agirre and A. Soroa. 2009. Personalizing Pagerank for Word Sense Disambiguation. In Proc. of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 33– 41. 1118 A. Ambartsoumian and F. Popowich. 2018. SelfAttention: A Better Building Block for Sentiment Analysis Neural Network Classifiers. In Proc. of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 130–139. R. Collobert and J. Weston. 2008. A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning. In Proc of the 25th International Conference on Machine Learning (ICML), pages 160–167. J. Devlin, M-W. Chang, K. Lee, and K. Toutanova. 2018. Bert: Pre-Training of Deep Bidirectional Transformers for Language Understanding. In arXiv:1810.04805. B. P. Douglas and M. B. Janet. 1992. The Design for Wall Street Journal-based CSR Corpus. In Proc of the HLT’91 Workshop on Speech and Natural Language, pages 357–362. S. Faralli and R. Navigli. 2012. A New MinimallySupervised Framework for Domain Word Sense Disambiguation. In Proc. of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1411–1422. X. Glorot, A. Bordes, and Y. Bengio. 2011. Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach. In Proc of the 28th Ingernational Conference on Machine Learning, pages 513–520. P. Jin, D. McCarthy, R. Koeling, and J. Carroll. 2009. Estimating and Exploiting the Entropy of Sense Distributions. In Proc of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL-HLT) 2009, pages 233–236. R. Johnson and T. Zhang. 2015. Effective Use of Word Order for Text Categorization with Convolutional Neural Networks. In Proc of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 103–112. A. Joulin, E. Grave, P. Bojanowski, and T. Mikolov. 2017. Bag of Tricks for Efficient Text Classification. In Proc. of the 15th Conference of the European Chapter of the Association for Conputational Linguistics, pages 427–431. Y. Kim. 2014. Convolutional Neural Networks for Sentence Classification. In Proc. of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1746–1751. J. H. Lau, P. Cook, D. McCarthy, S. Gella, and T. Baldwin. 2014. Learning Word Sense Distribution, Detecting Unattested Senses and Identifying Novel Senses using Topic Models. 
In Proc of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 259–270. D. D. Lewis, Y. Yang, T. G. Rose, and F. Li. 2004. RCV1: A New Benchmark Collection for Text Categorization Research. Journal of Machine Learning Research, 5:361–397. J. Liu, W-C. Chang, Y. Wu, and Y. Yang. 2017a. Deep Learning for Extreme Multi-label Text Classification. In Proc of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 115–124. P. Liu, X. Qiu, and X. Huang. 2017b. Adversarial Multi-Task Learning for Text Classification. In Proc of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1–10. P. Liu, X. Qiu, and Z. Huang. 2016. Recurrent Neural Network for Text Classification with Multi-task Learning. In Proc of the 25th International Joint Conference on Artificial Intelligence (IJCAI’16), pages 2873–2879. X. Liu, J. Gao, X. He, L. Deng, K. Duh, and Y-Y. Wang. 2015. Representation Learning using Multi-Task Deep Neural Networks for Semantic Classification and Information Retrieval. In Proc of the 2015 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 912–921. B. Magnini and G. Cavaglia. 2000. Integrating Subject Field Codes into WordNet. In Proc. of the International Conference on Language Resources and Evaluation, pages 1413–1418. B. Magnini, C. Strapparava, G. Pezzulo, and A. Gliozzo. 2002. The Role of Domain Information in Word Sense Disambiguation. Natural Language Engineering, 8:359–373. C. D. Manning, M. Surdeanu, J. Bauer, J. Finkel, S. J. Bethard, and D. McClosky. 2014. The Stanford Core NLP Natural Language Processing Toolkit. In Proc. of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60. D. McCarthy, R. Koeling, J. Weeds, and J. Carroll. 2004. Finding Predominant Word Senses in Untagged Text. In Proc. of the 42nd Annual Meeting on Association for Computational Linguistics, pages 279–286. D. McCarthy, R. Koeling, J. Weeds, and J. Carroll. 2007. Unsupervised Acquisition of Predominant Word Senses. Computational Linguistics, 34(4):553–590. O. Melamud, J. Goldberger, and I. Dagan. 2016. Context2vec: Learning Generic Context Embedding with Bidirectional LSTM. In Proc. of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 51–61. 1119 T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013. Efficient Estimation of Word Representations in Vector Space. In Proc. of the International Conference on Learning Representations Workshop. M. Palmer, C. Cotton, S. L. Delfs, and H. T. Dang. 2001. English Tasks: All-Words and Verb Lexical Sample. In Proc. of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambiguation Systems, Association for Computational Linguistics, pages 21–24. M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proc. of the 16th Anual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACLHLT), pages 2227–2237. A. Raganato, C. D. Bovi, and R. Navigli. 2017. Neural Sequence Learning Models for Word Sense Disambiguation. In Proc. of the Conference on Empirical Methods in Natural Language Processing, pages 1156–1167. T. Rose, M. Stevenson, and M. Whitehead. 2002. The Reuters Corpus Volume 1 - from Yesterday’s News to Tomorrow’s Language Resources. In Proc. 
of Language Resources and Evaluation. B. Snyder and M. Palmer. 2004. The English AllWords Task. In Proc. of SENSEVAL-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, Association for Computational Linguistics, pages 41–43. E. Strubell, P. Verga, D. Andor, D. Weiss, and A. McCallum. 2018. Linguistically-Informed SelfAttention for Semantic Role Labeling. In Proc. of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5027–5038. K. Taghipour and H. T. Ng. 2015. Semi-Supervised Word Sense Disambiguation Using Word Embeddings in General and Specific Domains. In Proc. of the 2015 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 314–323. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. 2017. Attention Is All You Need. In Proc. of the NIPS. J. Wang, Z. Wang, D. Zhang, and J. Yan. 2017. Combining Knowledge with Deep Convolutional Neural Networks for Short Text Classification. In Proc. of the 26th International Joint Conference on Artificial Intelligence, pages 2915–2921. P. Wang, J. Xu, B. Xu, C-L. Liu, H. Zhang, F. Wang, and H. Hao. 2015. Semantic Clustering and Convolutional Neural Network for Short Text Categorization. In Proc. of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 352–357. L. Xiao, H. Zhang, and W. Chen. 2018. Gated MultiTask Network for Text Classification. In Proc. of the 2018 Annual conference of the North American Chapter of the Association for Computational Linguistics, pages 726–731. Z. Yang, D. Yang, C. Dyer, X. He, A. Smola, and E. Hovy. 2016. Hierarchical Attention Networks for Document Classification. In Proc. of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies, pages 1480–1489. D. Yuan, J. Richardson, R. Doherty, C. Evans, and E. Altendorf. 2016. Semi-Supervised Word Sense Disambiguation with Neural Models. In Proc. of the 26th International Conference on Computational Linguistics, pages 1374–1385. R. Zhang, H. Lee, and D. Radev. 2016. Dependency Sensitive Convolutional Neural Networks for Modeling Sentences and Documents. In Proc. of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies, pages 1512–1521. X. Zhang, J. Zhao, and Y. LeCun. 2015. CharacterLevel Convolutional Networks for Text Classification. In Advances in Neural Information Processing systems, pages 649–657. Y. Zhang, M. Lease, and B. C. Wallace. 2017. Exploiting Domain Knowledge via Grouped Weight Sharing with Application to Text Categorization. In Proc. of the 55th Annual Meeting of the Association for Computational Linguistics, pages 155–160. Y. Zhang and B. C. Wallace. 2015. A Sensitivity Analysis of (and Practitioners’ Guide to) Convolutional Neural Networks for Sentence Classification. Computing Research Repository.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1120–1130 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1120 DeepSentiPeer: Harnessing Sentiment in Review Texts To Recommend Peer Review Decisions Tirthankar Ghosal, Rajeev Verma, Asif Ekbal, Pushpak Bhattacharyya Department of Computer Science and Engineering Indian Institute of Technology Patna, India (tirthankar.pcs16,rajeev.ee15,asif,pb)@iitp.ac.in Abstract Automatically validating a research artefact is one of the frontiers in Artificial Intelligence (AI) that directly brings it close to competing with human intellect and intuition. Although criticized sometimes, the existing peer review system still stands as the benchmark of research validation. The present-day peer review process is not straightforward and demands profound domain knowledge, expertise, and intelligence of human reviewer(s), which is somewhat elusive with the current state of AI. However, the peer review texts, which contains rich sentiment information of the reviewer, reflecting his/her overall attitude towards the research in the paper, could be a valuable entity to predict the acceptance or rejection of the manuscript under consideration. Here in this work, we investigate the role of reviewers sentiments embedded within peer review texts to predict the peer review outcome. Our proposed deep neural architecture takes into account three channels of information: the paper, the corresponding reviews, and the review polarity to predict the overall recommendation score as well as the final decision. We achieve significant performance improvement over the baselines (∼29% error reduction) proposed in a recently released dataset of peer reviews. An AI of this kind could assist the editors/program chairs as an additional layer of confidence in the final decision making, especially when non-responding/missing reviewers are frequent in present day peer review. 1 Introduction The rapid increase in research article submissions across different venues is posing a significant management challenge for the journal editors and conference program chairs1. Among 1Apparently CVPR, NIPS, AAAI 2019 received over 5100, 4900, 7000 submissions respectively! the load of works like assigning reviewers, ensuring timely receipt of reviews, slot-filling against the non-responding reviewer, taking informed decisions, communicating to the authors, etc., editors/program chairs are usually overwhelmed with many such demanding yet crucial tasks. However, the major hurdle lies in to decide the acceptance and rejection of the manuscripts based on the reviews received from the reviewers. The quality, randomness, bias, inconsistencies in peer reviews is well-debated across the academic community (Bornmann and Daniel, 2010). Due to the rise in article submissions and nonavailability of expert reviewers, editors/program chairs are sometimes left with no other options than to assign papers to the novice, out of domain reviewers which sometimes results in more inconsistencies and poor quality reviews. To study the arbitrariness inherent in the existing peer review system, organisers of the NIPS 2014 conference assigned 10% submissions to two different sets of reviewers and observed that the two committees disagreed for more than quarter of the papers (Langford and Guzdial, 2015). Again it is quite common that a paper rejected in one venue gets the cut in another with little or almost no improvement in quality. 
Many are of the opinion that the existing peer review system is fragile as it only depends on the view of a selected few (Smith, 2006). Moreover, even a preliminary study into the inners of the peer review system is itself very difficult because of data confidentiality and copyright issues of the publishers. However, the silver lining is that the peer review system is evolving with the likes of OpenReviews2, author response periods/rebuttals, increased effective communications between authors and reviewers, open access initiatives, peer review workshops, review forms with 2https://openreview.net 1121 objective questionnaires, etc. gaining momentum. The PeerRead dataset (Kang et al., 2018) is an excellent resource towards research and study on this very impactful and crucial problem. With our ongoing effort towards the development of an Artificial Intelligence (AI)-assisted peer review system, we are intrigued with: What if there is an additional AI reviewer which predicts decisions by learning the high-level interplay between the review texts and the papers? How would the sentiment embedded within the review texts empower such decision-making? Although editors/program chairs usually go by the majority of the reviewer recommendations, they still need to go through all the review texts corresponding to all the submissions. A good use case of this research would be: slot-filling the missing reviewer, providing an additional perspective to the editor in cases of contrasting/borderline reviews. This work in no way attempts to replace the human reviewers; instead, we are intrigued to see how an AI can act as an additional reviewer with inputs from her human counterparts and aid the decision-making in the peer review process. We develop a deep neural architecture incorporating full paper information and review text along with the associated sentiment to predict the acceptability and recommendation score of a given research article. We perform two tasks, a classification (predicting accept/reject decision) and a regression (predicting recommendation score) one. The evaluation shows that our proposed model successfully outperforms the earlier reported results in PeerRead. We also show that the addition of review sentiment component significantly enhances the predictive capability of such a system. 2 Related Work Artificial Intelligence in academic peer review is an important yet less explored territory. However, with the recent progress in AI research, the topic is gradually gaining attention from the community. Price and Flach (2017) did a thorough study of the various means of computational support to the peer review system. Mrowinski et al. (2017) explored an evolutionary algorithm to improve editorial strategies in peer review. The famous Toronto Paper Matching system (Charlin and Zemel, 2013) was developed to match paper with reviewers. Recently we (Ghosal et al., 2018b,a) investigated the impact of various features in the editorial pre-screening process. Wang and Wan (2018) explored a multi-instance learning framework for sentiment analysis from the peer review texts. We carry our current investigations on a portion of the recently released PeerRead dataset (Kang et al., 2018). Study towards automated support for peer review was otherwise not possible due to the lack of rejected paper instances and corresponding reviews. Our approach achieves significant performance improvement over the two tasks defined in Kang et al. (2018). 
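As a concrete illustration of how the compound/positive review polarity scores referred to above can be derived (a minimal sketch, not the authors' code; the sample review text is invented), sentence-wise VADER scores can be averaged over a review as follows.

```python
# Minimal sketch: average sentence-wise VADER polarity scores for a review.
# Assumes `pip install vaderSentiment nltk` and nltk.download("punkt").
from nltk.tokenize import sent_tokenize
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
review = ("The paper is well written and the idea is interesting. "
          "However, the evaluation is weak and several claims are unsupported.")

sentence_scores = [analyzer.polarity_scores(s) for s in sent_tokenize(review)]
keys = ["neg", "neu", "pos", "compound"]
review_polarity = {k: sum(s[k] for s in sentence_scores) / len(sentence_scores)
                   for k in keys}
print(review_polarity)  # review-level neg/neu/pos/compound scores
```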
We attribute this to the use of deep neural networks and the augmentation of review sentiment information in our architecture. 3 Data Description and Analysis The PeerRead dataset consists of papers, a set of associated peer reviews, and corresponding accept/reject decisions with aspect-specific scores of papers collected from several top-tier Artificial Intelligence (AI), Natural Language Processing (NLP) and Machine Learning (ML) conferences. Table 1 shows the data we consider in our experiments. We could not consider the NIPS and arXiv portions of PeerRead due to the lack of aspect scores and reviews, respectively. For more details on the dataset creation and the task, we refer the readers to Kang et al. (2018). We further use the submissions of ICLR 2018, corresponding reviews and aspect scores to boost our training set for the decision prediction task. One motivation of our work stems from the finding that aspect scores for certain factors like Impact, Originality, and Soundness/Correctness, which are seemingly central to the merit of the paper, often have a very low correlation with the final recommendation made by the reviewers, as is made evident in Kang et al. (2018). However, from the heatmap in Figure 1 we can see that the reviewer’s sentiments (compound/positive) embedded within the review texts have visible correlations with aspects like Recommendation, Appropriateness and the Overall Decision. This also seconds our recent finding that determining the scope or appropriateness of an article to a venue is the first essential step in peer review (Ghosal et al., 2018a). Since our study aims at deciding the fate of the paper, we take predicting the recommendation score and the overall decision as the objectives of our investigation. Thus, our proposal to augment the deep neural architecture with the sentiment of reviews seems intuitive.
Table 1: Dataset Statistics
Venues | #Papers | #Reviews | Aspect | Acc/Rej
ICLR 2017 | 427 | 7270 | Y | 172/255
ACL 2017 | 137 | 275 | Y | 88/49
CoNLL 2016 | 22 | 39 | Y | 11/11
ICLR 2018 | 909 | 2741 | Only Rec | 336/573
Total | 1495 | 10325 | – | 607/888
Figure 1: Pearson Correlation of Review Sentiment (:X) with different Aspect Scores (:Y) on the ACL 2017 dataset. A1→Appropriateness, A2→Clarity, A3→Impact, A4→Meaningful Comparison, A5→Originality, A6→Recommendation, A7→Soundness/Correctness, A8→Substance, D→Decision. pos→Positive Sentiment Score, neg→Negative Sentiment Score, neu→Neutral Sentiment Score, com→Compound Sentiment Score.
To calculate the sentiment polarity of a review text, we take the average of the sentence-wise sentiment scores from the Valence Aware Dictionary and sEntiment Reasoner (VADER) (Hutto and Gilbert, 2014). 4 Methodology 4.1 Pre-processing At the very beginning, we convert the papers in PDF to .json encoded files using the Science Parse3 library. 3https://github.com/allenai/science-parse 4.2 DeepSentiPeer Architecture Figure 2 illustrates the overall architecture we employ in our investigation. The left segment is for the decision prediction while the right segment predicts the overall recommendation score. 4.2.1 Document Encoding We extract full-text sentences from each research article and represent each sentence si ∈Rd using the Transformer variant of the Universal Sentence Encoder (USE) (Cer et al., 2018), where d = 512 is the dimension of the sentence semantic vector. A paper is then represented as, P = s1 ⊕s2 ⊕... 
⊕sn1, P ∈R n1 × d ⊕being the concatenation operator, n1 is the maximum number of sentences in a paper text in the entire dataset (padding is done wherever necessary). Similarly, we do this for each of the reviews and create a review representation as R = s1 ⊕s2 ⊕... ⊕sn2, R ∈R n2 × d n2 being the maximum number of sentences in the reviews. 4.2.2 Sentiment Encoding The sentiment encoding of the review is done using VADER Sentiment Analyzer. For a sentence si, VADER gives a vector Si, Si ∈R4. The review is then encoded (padded where necessary) for sentiment as rsenti = S1 ⊕S2 ⊕... ⊕Sn2, rsenti ∈Rn2×4. 4.2.3 Feature Extraction with Convolutional Neural Network We make use of a Convolutional Neural Network (CNN) to extract features from both the paper and review representations. CNN has shown great success in solving the NLP problems in recent years. The convolution operation works by sliding a filter Wfk ∈Rl×d to a window of length l, the output of such hth window is given as, fk h = g(Wfk · Xh−l+1:h + bk) Xh−l+1:h means the l sentences within the hth window in Paper P. bk is the bias for the kth filter, g() is the non-linear function. The feature map fk for the kth filter is then obtained by applying this 1123 Figure 2: DeepSentiPeer: A Sentiment Aware Deep Neural Architecture to Predict Reviewer Recommendation Score. Decision-Level Fusion and Feature-Level Fusion of Sentiment are shown for Task 1 and Task 2, respectively. filter to each possible window of sentences in the P as fk = [fk 1, fk 2, ..., fk h, ..., fk n1−l+1], fk ∈Rn1−l+1. We then apply a max-pooling operation to this filter map to get the most significant feature, ˆfk as ˆfk = max(fk). For a paper P, the final output of this convolution filter is then given as p = [ˆf1, ˆf2, ..., ˆfk, ..., ˆfF ], p ∈RF , F is the total number of filters used. In the same way, we can get r as the output of the convolution operator for the Review R. We call the outputs p and r as the high-level representation feature vector of the paper and the review, respectively. We then concatenate these feature vectors (Feature-Level Fusion). The reason we extract features from both is to simulate the editorial workflow, wherein ideally, the editor/chair would look at both into the paper and the corresponding reviews to arrive at a judgement. 4.2.4 Multi-layer Perceptron We employ a Multi-Layer Perceptron (MLP Predict) to take the joint paper+review representations xpr as input to get the final 1124 Baselines Task 1 → Aspect Score Prediction (RMSE) Test Datasets → ICLR ‡ ACL † CoNLL † Approaches ↓ 2017 2017 2016 Majority Baseline 1.6940 2.7968 2.9133 Mean Baseline 1.6095 2.4900 2.6086 Only Paper (Kang et al., 2018) 1.6462 2.7278 3.0591 Comparing Systems Only Review (Kang et al., 2018) 1.6955 2.7062 2.7072 Paper+Review (Kang et al., 2018) 1.6496 2.5011 2.9734 Only Review 1.5812 2.7191 2.6537 Proposed Architecture Review+Sentiment 1.4521 2.6845 2.5524 DeepSentiPeer Paper+Review+Sentiment 1.1679 2.3790 2.5399 Table 2: Results on Aspect Score Prediction Task. Training is done with only ICLR 2017 papers/reviews, † → Cross-Domain: Training on ICLR and testing upon entire data of ACL/CoNLL available in PeerRead dataset, ‡ → Test set is kept the same as (Kang et al., 2018), RMSE→Root Mean Squared Error. CNN variant as in (Kang et al., 2018) is used as the comparing system. representation as xpr = fMLP Predict(θpredict; [p, r]), where θpredict represents the parameters of the MLP Predict. 
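To make the encoding pipeline above concrete, the following is a minimal PyTorch-style sketch (not the authors' released code) of the sentence-wise VADER sentiment encoding and of a CNN over sentence embeddings with max-over-time pooling. The names encode_review_sentiment and SentenceCNN, the toy tensor shapes, and the default filter settings are our own illustrative assumptions.

```python
import torch
import torch.nn as nn
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer


def encode_review_sentiment(sentences, max_sents):
    """Sentence-wise VADER scores (neg, neu, pos, compound), zero-padded to max_sents."""
    analyzer = SentimentIntensityAnalyzer()
    scores = []
    for s in sentences[:max_sents]:
        p = analyzer.polarity_scores(s)          # dict with 'neg', 'neu', 'pos', 'compound'
        scores.append([p["neg"], p["neu"], p["pos"], p["compound"]])
    scores += [[0.0] * 4] * (max_sents - len(scores))   # padding
    return torch.tensor(scores)                  # shape: (max_sents, 4)


class SentenceCNN(nn.Module):
    """Convolution over a sequence of sentence embeddings + max-over-time pooling."""
    def __init__(self, d=512, num_filters=256, window=5):
        super().__init__()
        # Conv1d expects (batch, channels, length); channels = sentence-embedding dim d
        self.conv = nn.Conv1d(d, num_filters, kernel_size=window)
        self.act = nn.ReLU()

    def forward(self, x):                        # x: (batch, n_sents, d) USE embeddings
        h = self.act(self.conv(x.transpose(1, 2)))   # (batch, F, n_sents - window + 1)
        return h.max(dim=2).values               # max pooling -> (batch, F)


# Usage with toy shapes: paper and review each get their own CNN,
# then the pooled features are concatenated (feature-level fusion).
paper_cnn, review_cnn = SentenceCNN(), SentenceCNN()
paper = torch.randn(8, 666, 512)                 # toy batch of USE-encoded papers
review = torch.randn(8, 98, 512)                 # toy batch of USE-encoded reviews
x_pr = torch.cat([paper_cnn(paper), review_cnn(review)], dim=-1)   # (8, 512)
```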
We also extract features from the review sentiment representation xrs via another MLP (MLP Senti). xrs = fMLP Senti(θsenti; rsenti), θsenti being the parameters of MLP Senti. Finally, we fuse the extracted review sentiment feature and joint paper+review representation together to generate the overall recommendation score (DecisionLevel Fusion) using the affine transformation as prediction = (Wd · [xpr, xrs] + bd). We minimize the Mean Square Error (MSE) between the actual and predicted recommendation score. The motivation here is to augment the human judgement (review+embedded sentiment) regarding the quality of a paper in decision making. The long-term objective is to have the AI learn the notion of good and bad papers from the human perception reflected in peer reviews in correspondence with paper full-text. 4.2.5 Accept/Reject Decisions Instead of training the deep network on overall recommendation scores, we train the network with the final decisions of the papers in a classification setting. The entire setup is same but we concatenate all the reviews of a particular paper together to get the review representation. And rather than doing decision-level fusion, we perform featurelevel fusion where the decision is given as xprs = fMLP Predict(θ; [p,er,exrs]) c = Softmax(Wc · xprs + bc), where c is the output classification distribution across accept or reject classes. er is the high-level representation of review text after concatenating all reviews corresponding to a paper and exrsis the output of MLP Senti on the concatenated review text. We minimize Cross-Entropy Loss between predicted c and actual decisions. 4.3 Experimental Setup As we mention earlier, we undertake two tasks: Task 1: Predicting the overall recommendation score (Regression) and Task 2: Predicting the Accept/Reject Decision (Classification). To compare with Kang et al. (2018), we keep the experimental setup (train vs test ratio) identical and re-implement their codes to generate the comparing figures. However, Kang et al. (2018) performed Task 2 on ICLR 2017 dataset with handcrafted features, and Task 1 in a deep learning setting. Since our approach is a deep neural network based, we crawl additional paper+reviews from ICLR 2018 to boost the training set. For Task 1, n1 is 666 and n2 is 98 while for Task 2, n1 is 1494 and n2 is 525. We employ a grid search for hyperparameter optimization. For Task 1, F is 256, l is 5. ReLU is the non-linear function g(), learning rate is 0.007. We train the model with SGD optimizer, set momentum as 0.9 1125 Baseline Task 2 → Accept/Reject (Accuracy) Test Datasets → ICLR ‡ ACL † CoNLL † Approaches ↓ 2017 2017 2016 Majority Baseline 60.52 33.33 39.94 Comparing System Only Paper (Kang et al., 2018) 55.26∗ 35.93 41.23 Only Review 65.35 57.12 62.91 Proposed Architecture Review+Sentiment 69.79 59.31 62.22 DeepSentiPeer Paper+Review+Sentiment 71.05 64.76 67.71 Table 3: Results on Accept/Reject Classification Tasks. Training is done with ICLR 2017+ICLR 2018 papers/reviews, † →Cross-Domain: Training on ICLR and testing upon the entire data of ACL/CoNLL, ‡Test Set is kept the same as (Kang et al., 2018), RMSE→Root Mean Squared Error, ∗→65.79% if only trained with ICLR 2017, Comparing System (Kang et al., 2018) is feature-based and considers only paper, and not the reviews. and batch size as 32. We keep dropout at 0.5. We use the same number of filters with the same kernel size for both paper and review. In Task 2, for Paper CNN F is 128, l is 7 and for Review CNN F is 64 and l is 5. 
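As a recap of Sections 4.2.4 and 4.2.5 above, here is a minimal sketch, under our own assumptions about layer sizes and module names, of the two fusion strategies: decision-level fusion of the sentiment feature for the recommendation-score regression (Task 1) and feature-level fusion followed by a softmax classifier for the accept/reject decision (Task 2). It is an illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class DeepSentiPeerHeads(nn.Module):
    """Hypothetical fusion heads; all dimensions are placeholders."""
    def __init__(self, pr_dim=512, senti_dim=4 * 98, hidden=128):
        super().__init__()
        self.mlp_predict = nn.Sequential(nn.Linear(pr_dim, hidden), nn.ReLU())   # MLP_Predict
        self.mlp_senti = nn.Sequential(nn.Linear(senti_dim, hidden), nn.ReLU())  # MLP_Senti
        self.score = nn.Linear(2 * hidden, 1)                 # Task 1: decision-level fusion
        self.mlp_cls = nn.Sequential(nn.Linear(pr_dim + hidden, hidden), nn.ReLU())
        self.decide = nn.Linear(hidden, 2)                    # Task 2: accept/reject logits

    def task1(self, pr, r_senti):
        # pr = concatenated paper+review features, r_senti = flattened review sentiment
        x_pr, x_rs = self.mlp_predict(pr), self.mlp_senti(r_senti)
        return self.score(torch.cat([x_pr, x_rs], dim=-1)).squeeze(-1)   # train with MSELoss

    def task2(self, pr, r_senti):
        # feature-level fusion of paper+review features with the sentiment feature
        x_rs = self.mlp_senti(r_senti)
        x_prs = self.mlp_cls(torch.cat([pr, x_rs], dim=-1))
        return self.decide(x_prs)                             # train with CrossEntropyLoss


heads = DeepSentiPeerHeads()
pr, senti = torch.randn(8, 512), torch.randn(8, 4 * 98)
rec_score = heads.task1(pr, senti)     # (8,) predicted recommendation scores
logits = heads.task2(pr, senti)        # (8, 2) accept/reject logits
```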
Again we train the model with Adam Optimizer, keep the batch size as 64 and use 0.7 as the dropout rate to prevent overfitting. We intentionally keep our CNN/MLP shallow due to less training data. We make our codes 4 available for further explorations. 5 Results and Analysis Table 2 and Table 3 show our results for both the tasks. We propose a simple but effective architecture in this work since our primary intent is to establish that a sentiment-aware deep architecture would better suit these two problems. For Task 1, we can see that our review sentiment augmented approach outperforms the baselines and the comparing systems by a wide margin (∼ 29% reduction in error) on the ICLR 2017 dataset. With only using review+sentiment information, we are still able to outperform Kang et al. (2018) by a margin of 11% in terms of RMSE. A further relative error reduction of 19% with the addition of paper features strongly suggests that only review is not sufficient for the final recommendation. A joint model of the paper content and review text (the human touch) augmented with the underlying sentiment would efficiently guide the prediction. For Task 2, we observe that the handcrafted feature-based system by Kang et al. (2018) performs inferior compared to the baselines. This is because the features were very naive and did not 4https://github.com/aritzzz/DeepSentiPeer address the complexity involved in such a task. We perform better with a relative improvement of 28% in terms of accuracy, and also our system is end-toend trained. Presumably, to some extent, our deep neural network learned to distinguish between the probable accept versus probable reject by extracting useful information from the paper and review data. 5.1 Cross-Domain Experiments With the additional (but less) data of ACL 2017 and CoNLL 2016 in PeerRead, we perform the cross-domain experiments. We do training with the ICLR data (core Machine Learning papers) and take the test set from the NLP conferences (ACL/CoNLL). NLP nowadays is mostly machine learning (ML) centric, where we find several applications and extensive usage of ML algorithms to address different NLP problems. Here we observe a relative error reduction of 4.8% and 14.5% over the comparing system for ACL 2017 and CoNLL 2016, respectively (Table 2). For the decision prediction task, the comparing system performs even worse, and we outperform them by a considerable margin of 28% (ACL 2017) and 26% (CoNLL 2017), respectively (Table 3). The reason is that the work reported in Kang et al. (2018) relies on elementary handcrafted features extracted only from the paper; does not consider the review features whereas we include the review features along with the sentiment information in our deep neural architecture. However, we also find that our approach with only Review+Sentiment performs inferior to the Paper+Review method in Kang et al. (2018) for ACL 2017. This again seconds that inclusion of paper is vital in recommendation decisions. Only paper is enough for a human reviewer, but with the current state of AI, an AI 1126 reviewer would need the supervision of her human counterparts to arrive at a recommendation. So our system is suited to cases where the editor needs an additional judgment regarding a submission (such as dealing with missing/non-responding reviewers, an added layer of confidence with an AI which is aware of the past acceptances/rejections of a specific venue). 
6 Analysis: Effect of Sentiment on Reviewer’s Recommendation
Figure 3: Projections of the output activations of the final layer of MLP Senti. Points are annotated for Reviews from Table 4. X: Predicted Recommendation Scores, Y: Sentiment Activation Scores.
Table 4: Pearson Correlation (PC) Coefficient between the Recommendation Scores and Sentiment Activations. This is to account for the fact that sentiment is actually correlated with the prediction, signifying the strength of the model.
Pair | PC
Actual vs Prediction | 0.97
Prediction vs Sentiment Activations | -0.93
Actual vs Sentiment Activations | -0.91
Figure 3 shows the output activations5 from the final layer of MLP Senti against the predicted recommendation scores (5we call them Sentiment Activations). We can see that the papers are discriminated into visible clusters according to their recommendation scores. This indicates that DeepSentiPeer can extract useful features in close correspondence to human judgments. From Figure 3 and Table 4, we see that the sentiment activations are strongly correlated (negatively) with the actual and predicted recommendation scores. Therefore, we hypothesize that our model draws considerable strength if the review text has proper sentiment embedded in it. To further investigate this, we sample the papers/reviews from the ICLR 2017 test set. We consider the actual review text and the sentiment embedded therein to examine the performance of the system (see Table 5). We truncate the lengthy review texts and provide the OpenReview links for reference. Appendix A shows the heatmaps of the VADER sentiment scores generated for individual sentences corresponding to each paper review in Table 5. We acknowledge that, since scholarly review texts are mostly objective and not straightforward, the score for neutral polarity is strong as opposed to positive and negative. But still, we can see visible polarities for review sentences which are positive or negative in sentiment. For instance, the second-to-last sentence (s9): “The paper is not well written either” from R1 has a visible negative weight in the heatmap (Figure 5 in Appendix A). The same can be observed for the other review sentences as well.
Figure 4: Normalized Confusion Matrix for Accept/Reject Decisions on ICLR 2017 test data with the DeepSentiPeer (Paper+Review+Sentiment) model.
Predicted → | ACC | REJ
True ACC | 0.70 | 0.30
True REJ | 0.28 | 0.72
Besides the objective evaluation of the paper in the peer reviews, the reviewer’s opinion in the peer review text holds strong correspondence with the overall recommendation score. We can qualitatively see that the reviews R1, R2, and R3 are polarized towards the negative sentiment (Table 5). Our model can efficiently predict a reasonable recommendation score with respect to human judgment. We can say the same for R7, where the review mostly signifies a positive sentiment polarity. R6 provides an interesting observation. We see that the review R6 is not very expressive for such a
# Paper Title Review Text Prediction Actual Senti Act R1 Multi-label learning with the RNNs for Fashion Search —The technical contribution of this paper is not clear. Most of the approaches used are standard state-of-art methods and there are not much novelties. For a multi-label recognition task, there are other available methods, e.g. using binary models, changing cross-entropy loss function, etc. There is not any comparison between the RNN method and other simple baselines. The order of the sequential RNN prediction is not clear either. 
It seems that the attributes form a tree hierarchy, and that is used as the order of sequence. The paper is not well written either.— https://openreview.net/forum?id=HyWDCXjgx&noteId=B1Mp8grVl 4 3 0.01 R2 Transformation based Models of Video Sequences —While I agree with the authors on these points, I also find that the paper suffer from important flaws. Specifically: -the choice of not comparing with previous approaches in term of pixel prediction error seems very ”convenient”, to say the least. While it is clear that the evaluation metric is imperfect, it is not a reason to completely dismiss all quantitative comparisons with previous work. The frames output by the network on, e.g. the moving digits datasets (Figure 4), looks ok and can definitely be compared with other papers. Yet, the authors chose not to, which is suspicious.— https://openreview.net/forum?id=HkxAAvcxx&noteId= SJE7-lkVx 3 3 0.41 R3 Efficient Calculation of Polynomial Features on Sparse Matrices —Many more relevant papers should be cited from the recent literature.The experiment part is very weak. This paper claims that the time complexity of their algorithm is O(d k D k), which is an improvement over standard method O(d k) by a factor d k But in the experiments, when d=1, there is still a large gap ( 14s vs. 90s) between the proposed method and the standard one. The authors explain this as ”likely a language implementation”, which is not convincing. To fairly compare the two methods, of course you need to implement both in the same programming language and run experiments in the same environment. For higher degree feature expansion, there is no empirical experiments to show the advantage of the proposed method.— https: //openreview.net/forum?id=S1j4RqYxg&noteId=B17Fn04Vg 4 3 0.27 R4 Efficient Vector Representation for Documents through Corruption —While none of the pieces of this model are particularly novel, the result is an efficient learning algorithm for document representation with good empirical performance.Joint training of word and document embeddings is not a new idea, nor is the idea of enforcing the document to be represented by the sum of its word embeddings (see, e.g. see, e.g. ”The Sum of Its Parts”: Joint Learning of Word and Phrase Representations with Autoencoders’ by Lebret and Collobert). Furthermore, the corruption mechanism is nothing other than traditional dropout on the input layer. Coupled with the word2vec-style loss and training methods, this paper offers little on the novelty front.On the other hand, it is very efficient at generation time, requiring only an average of the word embeddings rather than a complicated inference step as in Doc2Vec. Moreover, by construction, the embedding captures salient global information about the document – it captures specifically that information that aids in local-context prediction. 
For such a simple model, the performance on sentiment analysis and document classification is quite encouraging.Overall, despite the lack of novelty, the simplicity, efficiency, and performance of this model make it worthy of wider readership and study, and I recommend acceptance.—https://openreview.net/ forum?id=B1Igu2ogg&noteId=rJBM9YbVg 6 7 -1.04 R5 R5 Towards a Neural Statistician —Hierarchical modeling is an important and high impact problem, and I think that it’s underexplored in the Deep Learning literature.Pros:-The few-shot learning results look good, but I’ mm not an expert in this area.-The idea of using a ”double” variational bound in a hierarchical generative model is well presented and seems widely applicable. Questions:-When training the statistic network, are minibatches (i.e. subsets of the examples) used?-If not, does using minibatches actually give you an unbiased estimator of the full gradient (if you had used all examples)? For example, what if the statistic network wants to pull out if *any* example from the dataset has a certain feature and treat that as the characterization.This seems to fit the graphical model on the right side of figure 1. If your statistic network is trained on minibatches, it won’t be able to learn this characterization, because a given minibatch will be missing some of the examples from the dataset.Using minibatches (as opposed to using all examples in the dataset) to train the statistic network seems like it would limit the expressive power of the model— https://openreview.net/forum?id=HJDBUF5le&noteId=HyWm1orEx 6 8 -0.65 R6 A recurrent neural network without chaos The authors of the paper set out to answer the question whether chaotic behaviour is a necessary ingredient for RNNs to perform well on some tasks.For that question’s sake,they propose an architecture which is designed to not have chaos. The subsequent experiments validate the claim that chaos is not necessary.This paper is refreshing. Instead of proposing another incremental improvement, the authors start out with a clear hypothesis and test it. This might set the base for future design principles of RNNs.The only downside is that the experiments are only conducted on tasks which are known to be not that demanding from a dynamical systems perspective; it would have been nice if the authors had traversed the set of data sets more to find data where chaos is actually necessary. https://openreview.net/forum?id=S1dIzvclg&noteId= H1LYxY84l 5 8 -1.01 R7 Batch Policy Gradient Methods for Improving Neural Conversation Models The author propose to use a off-policy actor-critic algorithm in a batch-setting to improve chatbots.The approach is well motivated and the paper is well written, except for some intuitions for why the batch version outperforms the on-line version (see comments on ”clarification regarding batch vs. online setting”).The artificial experiments are instructive, and the real-world experiments were performed very thoroughly although the results show only modest improvement. https://openreview.net/forum?id=rJfMusFll&noteId=H1bSmrx4x 7 7 -1.77 Table 5: A qualitative study of the effect of sentiment in the overall recommendation score prediction. Prediction →is the overall recommendation score predicted by our system, Actual →is the recommendation score given by reviewers. Senti Act are the output activations from the final layer of MLP Senti which are augmented to the decision layer for final recommendation score prediction. 
The correspondence between the sentiment embedded within the review texts and Sentiment Activations are fairly visible in Figure 3. Kindly refer to Appendix A for polarity strengths in individual review sentences. The OpenReview links in the table above give the full review texts. 1128 high recommendation score 8. It starts with introducing the authors work and listing the strengths and limitations of the work without much (and necessary) details. Our model hence predicts 5 as the recommendation score. Whereas R4 can be seen as the case of a usual well-written review, expressing the positive and negative aspects of the paper coherently. Our model predicts 6 for an actual recommendation score of 7. These validate the role of the reviewer’s opinion and sentiment to predict the recommendation score, and our model is competent enough to take into account the overall polarity of the review-text to drive the prediction. Figure 4 presents the confusion matrix of our proposed model on ICLR 2017 test data for Task 2. 7 Conclusion Here in this work, we show that the reviewer sentiment information embedded within peer review texts could be leveraged to predict the peer review outcomes. Our deep neural architecture makes use of three information channels: the paper full-text, corresponding peer review texts and the sentiment within the reviews to address the complex task of decision making in peer review. With further exploration, we aim to mould the ongoing research to an efficient AI-enabled system that would assist the journal editors or conference chairs in making informed decisions. However, considering the sensitivity of the topic, we would like to further dive deep into exploring the subtle nuances that leads into the grading of peer review aspects. We found that review reliability prediction should prelude these tasks since not all reviews are of equal quality or are significant to the final decision making. We aim to include review reliability prediction in the pipeline of our future work. However, we are in consensus that scholarly language processing is not straightforward. We need stronger, pervasive models to capture the high-level interplay of the paper and peer reviews to decide the fate of a manuscript. We intend to work upon those and also explore more sophisticated techniques for sentiment polarity encoding. Acknowledgements The first author, Tirthankar Ghosal, acknowledges Visvesvaraya PhD Scheme for Electronics and IT, an initiative of Ministry of Electronics and Information Technology (MeitY), Government of India for fellowship support. The third author, Asif Ekbal, acknowledges Young Faculty Research Fellowship (YFRF), supported by Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia). We also thank Elsevier Center of Excellence for Natural Language Processing, Indian Institute of Technology Patna for adequate infrastructural support to carry out this research. Finally, we appreciate the anonymous reviewers for their critical evaluation of our work and suggestions to carry forward from here. References Lutz Bornmann and Hans-Dieter Daniel. 2010. Reliability of reviewers’ ratings when using public peer review: a case study. Learned Publishing, 23(2):124–131. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 
2018. Universal sentence encoder for english. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018: System Demonstrations, Brussels, Belgium, October 31 - November 4, 2018, pages 169–174. Laurent Charlin and Richard Zemel. 2013. The toronto paper matching system: an automated paperreviewer assignment system. Tirthankar Ghosal, Ravi Sonam, Sriparna Saha, Asif Ekbal, and Pushpak Bhattacharyya. 2018a. Investigating domain features for scope detection and classification of scientific articles. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA). Tirthankar Ghosal, Rajeev Verma, Asif Ekbal, Sriparna Saha, and Pushpak Bhattacharyya. 2018b. Investigating impact features in editorial pre-screening of research papers. In Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Libraries, JCDL 2018, Fort Worth, TX, USA, June 0307, 2018, pages 333–334. Clayton J. Hutto and Eric Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In Proceedings of the Eighth International Conference on Weblogs and Social Media, ICWSM 2014, Ann Arbor, Michigan, USA, June 1-4, 2014. 1129 Dongyeop Kang, Waleed Ammar, Bhavana Dalvi, Madeleine van Zuylen, Sebastian Kohlmeier, Eduard H. Hovy, and Roy Schwartz. 2018. A dataset of peer reviews (peerread): Collection, insights and NLP applications. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1647–1661. John Langford and Mark Guzdial. 2015. The arbitrariness of reviews, and advice for school administrators. Commun. ACM, 58(4):12–13. Maciej J Mrowinski, Piotr Fronczak, Agata Fronczak, Marcel Ausloos, and Olgica Nedic. 2017. Artificial intelligence in peer review: How can evolutionary computation support journal editors? PloS one, 12(9):e0184711. Simon Price and Peter A. Flach. 2017. Computational support for academic peer review: a perspective from artificial intelligence. Commun. ACM, 60(3):70–79. Richard Smith. 2006. Peer review: a flawed process at the heart of science and journals. Journal of the royal society of medicine, 99(4):178–182. Ke Wang and Xiaojun Wan. 2018. Sentiment analysis of peer review texts for scholarly papers. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR 2018, Ann Arbor, MI, USA, July 08-12, 2018, pages 175–184. 1130 A Heatmaps Depicting Sentiment Polarity in Review Texts Figure 5: Heatmaps of the sentence-wise VADER sentiment polarity of reviews considered in Table 4. Reviews generally reflect the polarity of the reviewer towards the respective work. s0...sn →are the sentences in the peer review texts.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1131–1141 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1131 Gated Embeddings in End-to-End Speech Recognition for Conversational-Context Fusion Suyoun Kim1, Siddharth Dalmia2 and Florian Metze2 1Electrical & Computer Engineering 2Language Technologies Institute, School of Computer Science Carnegie Mellon University {suyoung1, sdalmia, fmetze}@andrew.cmu.edu Abstract We present a novel conversational-context aware end-to-end speech recognizer based on a gated neural network that incorporates conversational-context/word/speech embeddings. Unlike conventional speech recognition models, our model learns longer conversational-context information that spans across sentences and is consequently better at recognizing long conversations. Specifically, we propose to use text-based external word and/or sentence embeddings (i.e., fastText, BERT) within an end-to-end framework, yielding significant improvement in word error rate with better conversational-context representation. We evaluated the models on the Switchboard conversational speech corpus and show that our model outperforms standard end-to-end speech recognition models. 1 Introduction In a long conversation, there exists a tendency of semantically related words, or phrases reoccur across sentences, or there exists topical coherence. Existing speech recognition systems are built at individual, isolated utterance level in order to make building systems computationally feasible. However, this may lose important conversational context information. There have been many studies that have attempted to inject a longer context information (Mikolov et al., 2010; Mikolov and Zweig, 2012; Wang and Cho, 2016; Ji et al., 2016; Liu and Lane, 2017; Xiong et al., 2018), all of these models are developed on text data for language modeling task. There has been recent work attempted to use the conversational-context information within a end-to-end speech recognition framework (Kim and Metze, 2018; Kim et al., 2018; Kim and Metze, 2019). The new end-to-end speech recognition approach (Graves et al., 2006; Graves and Jaitly, 2014; Hannun et al., 2014; Miao et al., 2015; Bahdanau et al., 2015; Chorowski et al., 2015; Chan et al., 2016; Kim et al., 2017) integrates all available information within a single neural network model, allows to make fusing conversational-context information possible. However, these are limited to encode only one preceding utterance and learn from a few hundred hours of annotated speech corpus, leading to minimal improvements. Meanwhile, neural language models, such as fastText (Bojanowski et al., 2017; Joulin et al., 2017, 2016), ELMo (Peters et al., 2018), OpenAI GPT (Radford et al., 2019), and Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019), that encode words and sentences in fixed-length dense vectors, embeddings, have achieved impressive results on various natural language processing tasks. Such general word/sentence embeddings learned on large text corpora (i.e., Wikipedia) has been used extensively and plugged in a variety of downstream tasks, such as question-answering and natural language inference, (Devlin et al., 2019; Peters et al., 2018; Seo et al., 2017), to drastically improve their performance in the form of transfer learning. 
In this paper, we create a conversational-context aware end-to-end speech recognizer capable of incorporating a conversational-context to better process long conversations. Specifically, we propose to exploit external word and/or sentence embeddings which trained on massive amount of text resources, (i.e. fastText, BERT) so that the model can learn better conversational-context representations. So far, the use of such pre-trained embeddings have found limited success in the speech recognition task. We also add a gating mechanism to the decoder network that can integrate all the available embeddings (word, speech, conversational-context) efficiently with increase 1132 representational power using multiplicative interactions. Additionally, we explore a way to train our speech recognition model even with text-only data in the form of pre-training and joint-training approaches. We evaluate our model on the Switchboard conversational speech corpus (Godfrey and Holliman, 1993; Godfrey et al., 1992), and show that our model outperforms the sentence-level end-to-end speech recognition model. The main contributions of our work are as follows: • We introduce a contextual gating mechanism to incorporate multiple types of embeddings, word, speech, and conversationalcontext embeddings. • We exploit the external word (fastText) and/or sentence embeddings (BERT) for learning better conversational-context representation. • We perform an extensive analysis of ways to represent the conversational-context in terms of the number of utterance history, and sampling strategy considering to use the generated sentences or the true preceding utterance. • We explore a way to train the model jointly even with text-only dataset in addition to annotated speech data. 2 Related work Several recent studies have considered to incorporate a context information within a end-to-end speech recognizer (Pundak et al., 2018; Alon et al., 2019). In contrast with our method which uses a conversational-context information in a long conversation, their methods use a list of phrases (i.e. play a song) in reference transcription in specific tasks, contact names, songs names, voice search, dictation. Several recent studies have considered to exploit a longer context information that spans multiple sentences (Mikolov and Zweig, 2012; Wang and Cho, 2016; Ji et al., 2016; Liu and Lane, 2017; Xiong et al., 2018). In contrast with our method which uses a single framework for speech recognition tasks, their methods have been developed on text data for language models, and therefore, it must be integrated with a conventional acoustic model which is built separately without a longer context information. Several recent studies have considered to embed a longer context information within a end-toend framework (Kim and Metze, 2018; Kim et al., 2018; Kim and Metze, 2019). In contrast with our method which can learn a better conversationalcontext representation with a gated network that incorporate external word/sentence embeddings from multiple preceding sentence history, their methods are limited to learn conversationalcontext representation from one preceding sentence in annotated speech training set. Gating-based approaches have been used for fusing word embeddings with visual representations in genre classification task or image search task (Arevalo et al., 2017; Kiros et al., 2018) and for learning different languages in speech recognition task (Kim and Seltzer, 2018). 
3 End-to-End Speech Recognition Models 3.1 Joint CTC/Attention-based encoder-decoder network We perform end-to-end speech recognition using a joint CTC/Attention-based approach with graphemes as the output symbols (Kim et al., 2017; Watanabe et al., 2017). The key advantage of the joint CTC/Attention framework is that it can address the weaknesses of the two main endto-end models, Connectionist Temporal Classification (CTC) (Graves et al., 2006) and attentionbased encoder-decoder (Attention) (Bahdanau et al., 2016), by combining the strengths of the two. With CTC, the neural network is trained according to a maximum-likelihood training criterion computed over all possible segmentations of the utterance’s sequence of feature vectors to its sequence of labels while preserving left-right order between input and output. With attentionbased encoder-decoder models, the decoder network can learn the language model jointly without relying on the conditional independent assumption. Given a sequence of acoustic feature vectors, x, and the corresponding graphemic label sequence, y, the joint CTC/Attention objective is represented as follows by combining two objectives with a tunable parameter λ : 0 ≤λ ≤1: L = λLCTC + (1 −λ)Latt. (1) Each loss to be minimized is defined as the negative log likelihood of the ground truth character 1133 sequence y∗, is computed from: LCTC ≜−ln X π∈Φ(y) p(π|x) (2) Latt ≜− X u ln p(y∗ u|x, y∗ 1:u−1) (3) where π is the label sequence allowing the presence of the blank symbol, Φ is the set of all possible π given u-length y, and y∗ 1:u−1 is all the previous labels. Both CTC and the attention-based encoderdecoder networks are also used in the inference step. The final hypothesis is a sequence that maximizes a weighted conditional probability of CTC and attention-based encoder-decoder network (Hori et al., 2017): y∗= argmax{γ log pCTC(y|x) + (1 −γ) log patt(y|x)} (4) 3.2 Acoustic-to-Words Models In this work, we use word units as our model outputs instead of sub-word units. Direct acousticsto-word (A2W) models train a single neural network to directly recognize words from speech without any sub-word units, pronunciation model, decision tree, decoder, which significantly simplifies the training and decoding process (Soltau et al., 2017; Audhkhasi et al., 2017, 2018; Li et al., 2018; Palaskar and Metze, 2018). In addition, building A2W can learn more semantically meaningful conversational-context representations and it allows to exploit external resources like word/sentence embeddings where the unit of representation is generally words. However, A2W models require more training data compared to conventional sub-word models because it needs sufficient acoustic training examples per word to train well and need to handle out-ofvocabulary(OOV) words. As a way to manage this OOV issue, we first restrict the vocabulary to 10k frequently occurring words. We then additionally use a single character unit and start-ofOOV (sunk), end-of-OOV (eunk) tokens to make our model generate a character by decomposing the OOV word into a character sequence. For example, the OOV word, rainstorm, is decomposed into (sunk) r a i n s t o r m (eunk) and the model tries to learn such a character sequence rather than generate the OOV token. From this method, we obtained 1.2% - 3.7% word error rate (WER) relative improvements in evaluation set where exists 2.9% of OOVs. 4 Conversational-context Aware Models In this section, we describe the A2W model with conversational-context fusion. 
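The following is a minimal PyTorch sketch of the joint CTC/attention objective (Eq. 1) and the joint decoding score (Eq. 4) discussed above. It assumes pre-computed encoder/decoder outputs and simplified target handling, and it is not the ESPnet implementation used in the experiments.

```python
import torch
import torch.nn as nn

ctc_criterion = nn.CTCLoss(blank=0, zero_infinity=True)
att_criterion = nn.CrossEntropyLoss(ignore_index=-1)   # -1 marks padded decoder positions


def joint_ctc_att_loss(ctc_log_probs, enc_lens, ctc_targets, ctc_target_lens,
                       att_logits, att_targets, lam=0.2):
    """L = lambda * L_CTC + (1 - lambda) * L_att (Eq. 1).

    ctc_log_probs: (T, N, V) log-probabilities from the encoder/CTC branch
    att_logits:    (N, U, V) decoder logits, att_targets: (N, U) with -1 padding
    """
    l_ctc = ctc_criterion(ctc_log_probs, ctc_targets, enc_lens, ctc_target_lens)
    l_att = att_criterion(att_logits.reshape(-1, att_logits.size(-1)),
                          att_targets.reshape(-1))
    return lam * l_ctc + (1.0 - lam) * l_att


def joint_decoding_score(log_p_ctc, log_p_att, gamma=0.3):
    """Hypothesis score combined during beam search (Eq. 4)."""
    return gamma * log_p_ctc + (1.0 - gamma) * log_p_att
```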
In order to fuse conversational context information within the A2W, end-to-end speech recognition framework, we extend the decoder sub-network to predict the output additionally conditioning on conversational context, by learning a conversational-context embedding. We encode single or multiple preceding utterance histories into a fixed-length, single vector, then inject it to the decoder network as an additional input at every output step. Let say we have K number of utterances in a conversation. For k-th sentence, we have acoustic features (x1, · · · , xT )k and output word sequence, (w1, · · · , wU). At output timestamp u, our decoder generates the probability distribution over words (wk u), conditioned on 1) speech embeddings, attended high-level representation (ek speech) generated from encoder, and 2) word embeddings from all the words seen previously (eu−1 word), and 3) conversationalcontext embeddings (ek context), which represents the conversational-context information for current (k) utterance prediction: ek speech =Encoder(xk) (5) wk u ∼Decoder(ek context, ek word, ek speech) (6) We can simply represent such contextual embedding, ek context, by mean of one-hot word vectors or word distributions, mean(ek−1 word1 + · · · + ek−1 wordU ) from the preceding utterances. In order to learn and use the conversationalcontext during training and decoding, we serialize the utterances based on their onset times and their conversations rather than random shuffling of data. We shuffle data at the conversation level and create mini-batches that contain only one sentence of each conversation. We fill the ”dummy” input/output example at positions where the conversation ended earlier than others within the minibatch to not influence other conversations while passing context to the next batch. 1134 Figure 1: Conversational-context embedding representations from external word or sentence embeddings. 4.1 External word/sentence embeddings Learning better representation of conversationalcontext is the key to achieve better processing of long conversations. To do so, we propose to encode the general word/sentence embeddings pretrained on large textual corpora within our endto-end speech recognition framework. Another advantage of using pre-trained embedding models is that we do not need to back-propagate the gradients across contexts, making it easier and faster to update the parameters for learning a conversational-context representation. There exist many word/sentence embeddings which are publicly available. We can broadly classify them into two categories: (1) non-contextual word embeddings, and (2) contextual word embeddings. Non-contextual word embeddings, such as Word2Vec (Mikolov and Zweig, 2012), GloVe (Pennington et al., 2014), fastText (Bojanowski et al., 2017), maps each word independently on the context of the sentence where the word occur in. Although it is easy to use, it assumes that each word represents a single meaning which is not true in real-word. Contextualized word embeddings, sentence embeddings, such as deep contextualized word representations (Peters et al., 2018), BERT (Devlin et al., 2019), encode the complex characteristics and meanings of words in various context by jointly training a bidirectional language model. The BERT model proposed a masked language model training approach enabling them to also learn good “sentence” representation in order to predict the masked word. 
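A small sketch of the simplest conversational-context representation described above, i.e., the mean of one-hot word vectors of the preceding utterance; the function name and shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def simple_context_embedding(prev_word_ids, vocab_size=10038):
    """Mean of one-hot word vectors of the preceding utterance, giving a
    bag-of-words distribution used as e_context (the simplest variant)."""
    one_hot = F.one_hot(prev_word_ids, vocab_size).float()   # (U, V), int64 word ids
    return one_hot.mean(dim=0)                               # (V,)


# At every output step u of utterance k, the decoder then conditions on the
# triple [e_context, e_word(y_{u-1}), e_speech], e.g. by concatenation or,
# as described next, by contextual gating.
prev_utt = torch.tensor([12, 845, 3, 77])                    # toy word ids
e_context = simple_context_embedding(prev_utt)
```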
In this work, we explore both types of embeddings to learn conversational-context embeddings as illustrated in Figure 1. The first method is to use word embeddings, fastText, to generate 300dimensional embeddings from 10k-dimensional one-hot vector or distribution over words of each previous word and then merge into a single context vector, ek context. Since we also consider multiple word/utterance history, we consider two simple ways to merge multiple embeddings (1) mean, and (2) concatenation. The second method is to use sentence embeddings, BERT. It is used to a generate single 786-dimensional sentence embedding from 10k-dimensional one-hot vector or distribution over previous words and then merge into a single context vector with two different merging methods. Since our A2W model uses a restricted vocabulary of 10k as our output units and which is different from the external embedding models, we need to handle out-of-vocabulary words. For fastText, words that are missing in the pretrained embeddings we map them to a random multivariate normal distribution with the mean as the sample mean and variance as the sample variance of the known words. For BERT, we use its provided tokenizer to generates byte pair encodings to handle OOV words. Using this approach, we can obtain a more dense, informative, fixed-length vectors to encode conversational-context information, ek context to be used in next k-th utterance prediction. 4.2 Contextual gating We use contextual gating mechanism in our decoder network to combine the conversationalcontext embeddings with speech and word embeddings effectively. Our gating is contextual in the sense that multiple embeddings compute a gate value that is dependent on the context of multiple utterances that occur in a conversation. Using these contextual gates can be beneficial to decide how to weigh the different embed1135 dings, conversational-context, word and speech embeddings. Rather than merely concatenating conversational-context embeddings (Kim and Metze, 2018), contextual gating can achieve more improvement because its increased representational power using multiplicative interactions. Figure 2 illustrates our proposed contextual gating mechanism. Let ew = ew(yu−1) be our previous word embedding for a word yu−1, and let es = es(xk 1:T ) be a speech embedding for the acoustic features of current k-th utterance xk 1:T and ec = ec(sk−1−n:k−1) be our conversationalcontext embedding for n-number of preceding utterances sk−1−n:k−1. Then using a gating mechanism: g = σ(ec, ew, es) (7) where σ is a 1 hidden layer DNN with sigmoid activation, the gated embedding e is calcuated as e = g ⊙(ec, ew, es) (8) h = LSTM(e) (9) and fed into the LSTM decoder hidden layer. The output of the decoder h is then combined with conversational-context embedding ec again with a gating mechanism, g = σ(eC, h) (10) ˆh = g ⊙(ec, h) (11) Then the next hidden layer takes these gated activations, ˆh, and so on. Figure 2: Our contextual gating mechanism in decoder network to integrate three different embeddings from: 1) conversational-context, 2) previous word, 3) current speech. Dataset # of utter. # of conversations avg. # of utter. /conversation training 192,656 2402 80 validation 4,000 34 118 eval.(SWBD) 1,831 20 92 eval.(CH) 2,627 20 131 Table 1: Experimental dataset description. We used 300 hours of Switchboard conversational corpus. Note that any pronunciation lexicon or Fisher transcription was not used. 
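The contextual gating of Eqs. 7–11 can be sketched as follows; the embedding dimensions and module names are our own placeholders, not the authors' code. The sigmoid gate lets the model down-weight the conversational context when it is uninformative for the current utterance.

```python
import torch
import torch.nn as nn


class ContextualGate(nn.Module):
    """g = sigmoid(DNN([e_c, e_w, e_s])); gated embedding e = g * [e_c, e_w, e_s]."""
    def __init__(self, dims):
        super().__init__()
        total = sum(dims)
        # one-hidden-layer DNN with a sigmoid output, as in Eq. 7 (simplified)
        self.gate = nn.Sequential(nn.Linear(total, total), nn.ReLU(),
                                  nn.Linear(total, total), nn.Sigmoid())

    def forward(self, *embeddings):
        cat = torch.cat(embeddings, dim=-1)
        return self.gate(cat) * cat


class GatedContextDecoderStep(nn.Module):
    """One decoder step with gating before and after the LSTM (Eqs. 8-11)."""
    def __init__(self, ctx_dim=768, word_dim=300, speech_dim=320, hidden=300):
        super().__init__()
        self.gate_in = ContextualGate([ctx_dim, word_dim, speech_dim])
        self.lstm = nn.LSTMCell(ctx_dim + word_dim + speech_dim, hidden)
        self.gate_out = ContextualGate([ctx_dim, hidden])

    def forward(self, e_c, e_w, e_s, state):
        e = self.gate_in(e_c, e_w, e_s)     # Eq. 8: gated input embedding
        h, c = self.lstm(e, state)          # Eq. 9: LSTM decoder layer
        h_hat = self.gate_out(e_c, h)       # Eqs. 10-11: re-gate with the context
        return h_hat, (h, c)                # h_hat feeds the next hidden layer
```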
5 Experiments 5.1 Datasets To evaluate our proposed conversational end-toend speech recognition model, we use the Switchboard (SWBD) LDC corpus (97S62) task. We split 300 hours of the SWBD training set into two: 285 hours of data for the model training, and 5 hours of data for the hyper-parameter tuning. We evaluate the model performance on the HUB5 Eval2000 which consists of the Callhome English (CH) and Switchboard (SWBD) (LDC2002S09, LDC2002T43). In Table 1, we show the number of conversations and the average number of utterances per a single conversation. The audio data is sampled at 16kHz, and then each frame is converted to a 83-dimensional feature vector consisting of 80-dimensional log-mel filterbank coefficients and 3-dimensional pitch features as suggested in (Miao et al., 2016). The number of our word-level output tokens is 10,038, which includes 47 single character units as described in Section 3.2. Note that no pronunciation lexicon was used in any of the experiments. 5.2 Training and decoding For the architecture of the end-to-end speech recognition, we used joint CTC/Attention end-toend speech recognition (Kim et al., 2017; Watanabe et al., 2017). As suggested in (Zhang et al., 2017; Hori et al., 2017), the input feature images are reduced to (1/4 × 1/4) images along with the time-frequency axis within the two max-pooling layers in CNN. Then, the 6-layer BLSTM with 320 cells is followed by the CNN layer. For the attention mechanism, we used a location-based method (Chorowski et al., 2015). For the decoder network, we used a 2-layer LSTM with 300 cells. In addition to the standard decoder network, our proposed models additionally require extra parameters for gating layers in order to fuse 1136 Trainable External SWBD CH Model Output Units Params LM (WER%) (WER%) Prior Models LF-MMI (Povey et al., 2016) CD phones N/A  9.6 19.3 CTC (Zweig et al., 2017) Char 53M  19.8 32.1 CTC (Sanabria and Metze, 2018) Char, BPE-{300,1k,10k} 26M  12.5 23.7 CTC (Audhkhasi et al., 2018) Word (Phone init.) N/A  14.6 23.6 Seq2Seq (Zeyer et al., 2018) BPE-10k 150M*  13.5 27.1 Seq2Seq (Palaskar and Metze, 2018) Word-10k N/A  23.0 37.2 Seq2Seq (Zeyer et al., 2018) BPE-1k 150M*  11.8 25.7 Our baseline Word-10k 32M  18.2 30.7 Our Proposed Conversational Model Gated Contextual Decoder Word-10k 35M  17.3 30.5 + Decoder Pretrain Word-10k 35M  16.4 29.5 + fastText for Word Emb. Word-10k 35M  16.0 29.5 (a) fastText for Conversational Emb. Word-10k 34M  16.0 29.5 (b) BERT for Conversational Emb. Word-10k 34M  15.7 29.2 (b) + Turn number 5 Word-10k 34M  15.5 29.0 Table 2: Comparison of word error rates (WER) on Switchboard 300h with standard end-to-end speech recognition models and our proposed end-to-end speech recogntion models with conversational context. (The * mark denotes our estimate for the number of parameters used in the previous work). conversational-context embedding to the decoder network compared to baseline. We denote the total number of trainable parameters in Table 2. For the optimization method, we use AdaDelta (Zeiler, 2012) with gradient clipping (Pascanu et al., 2013). We used λ = 0.2 for joint CTC/Attention training (in Eq. 1) and γ = 0.3 for joint CTC/Attention decoding (in Eq.4). We bootstrap the training of our proposed conversational end-to-end models from the baseline endto-end models. To decide the best models for testing, we monitor the development accuracy where we always use the model prediction in order to simulate the testing scenario. 
At inference, we used a left-right beam search method (Sutskever et al., 2014) with the beam size 10 for reducing the computational cost. We adjusted the final score, s(y|x), with the length penalty 0.5. The models are implemented using the PyTorch deep learning library (Paszke et al., 2017), and ESPnet toolkit (Kim et al., 2017; Watanabe et al., 2017, 2018). 6 Results Our results are summarized in the Table 2 where we first present the baseline results and then show the improvements by adding each of the individual components that we discussed in previous sections, namely, gated decoding, pretraining decoder network, external word embedding, external conversational embedding and increasing receptive field of the conversational context. Our best model gets around 15% relative improvement on the SWBD subset and 5% relative improvement on the CallHome subset of the eval2000 dataset. We start by evaluating our proposed model which leveraged conversational-context embeddings learned from training corpus and compare it with a standard end-to-end speech recognition models without conversational-context embedding. As seen in Table 2, we obtained a performance gain over the baseline by using conversational-context embeddings which is learned from training set. 6.1 Pre-training decoder network Then, we observe that pre-training of decoder network can improve accuracy further as shown in Table 2. Using pre-training the decoder network, we achieved 5% relative improvement in WER on SWBD set. Since we add external parameters in decoder network to learn conversational-context embeddings, our model requires more efforts to learn these additional parameters. To relieve this issue, we used pre-training techniques to train decoder network with text-only data first. We simply used a mask on top of the Encoder/Attention layer so that we can control the gradients of batches contains text-only data and do not update the En1137 coder/Attention sub-network parameters. 6.2 Use of words/sentence embeddings Next, we evaluated the use of pretrained external embeddings (fastText and BERT). We initially observed that we can obtain 2.4% relative improvement over (the model with decoder pretraining) in WER by using fastText for additional word embeddings to the gated decoder network. We also extensively evaluated various ways to use fastText/BERT for conversational-context embeddings. Both methods with fastText and with BERT shows significant improvement from the baseline as well as vanilla conversational-context aware model. 6.3 Conversational-context Receptive Field We also investigate the effect of the number of utterance history being encoded. We tried different N = [1, 5, 9] number of utterance histories to learn the conversational-context embeddings. Figure 3 shows the relative improvements in the accuracy on the Dev set (5.2) over the baseline “non-conversational” model. We show the improvements on the two different methods of merging the contextual embeddings, namely mean and concatenation. Typically increasing the receptive field of the conversational-context helps improve the model. However, as the number of utterence history increased, the number of trainable parameters of the concatenate model increased making it harder for the model to train. This led to a reduction in the accuracy. We also found that using 5-utterance history with concatenation performed best (15%) on the SWBD set, and using 9-number of utterance history with mean method performed best (5%) on CH set. 
We also observed that the improvement diminished when we used 9-utterance history for SWBD set, unlike CH set. One possible explanation is that the conversational-context may not be relevant to the current utterance prediction or the model is overfitting. 2 4 6 8 10.5 11 11.5 12 12.5 # of utterance history Relative Improvement(%) Mean Concat Figure 3: The relative improvement in Development accuracy over sets over baseline obtained by using conversational-context embeddings with different number of utterance history and different merging techniques. 6.4 Sampling technique 0 20 40 60 80 100 0 1 2 3 Utterance Sampling Rate(%) Relative Improvement(%) Accuracy on Dev. set Figure 4: The relative improvement in Development accuracy over 100% sampling rate which was used in (Kim and Metze, 2018) obtained by using conversational-context embeddings with different sampling rate. We also experiment with an utterance level sampling strategy with various sampling ratio, [0.0, 0.2, 0.5, 1.0]. Sampling techniques have been extensively used in sequence prediction tasks to reduce overfitting (Bengio et al., 2015) by training the model conditioning on generated tokens from the model itself, which is how the model actually do at inference, rather than the groundtruth tokens. Similar to choosing previous word tokens from the ground truth or from the model output, we apply it to choose previous utterance from the ground truth or from the model output for learning conversational-context embeddings. Fig1138 ure 4 shows the relative improvement in the development accuracy (5.2) over the 1.0 sampling rate which is always choosing model’s output. We found that a sampling rate of 20% performed best. 6.5 Analysis of context embeddings We develop a scoring function, s(i, j) to check if our model conserves the conversational consistency for validating the accuracy improvement of our approach. The scoring function measures the average of the conversational distances over every consecutive hypotheses generated from a particular model. The conversational distance is calculated by the Euclidean distance, dist(ei, ej) of the fixed-length vectors ei, ej which represent the model’s i, j-th hypothesis, respectively. To obtain a fixed-length vector, utterance embedding, given the model hypothesis, we use BERT sentence embedding as an oracle. Mathematically it can be written as, s(i, j) = 1 N X i,j∈eval (dist(ei, ej)) where, i, j is a pair of consecutive hypotheses in evaluation data eval, N is the total number of i, j pairs, ei, ej are BERT embeddings. In our experiment, we select the pairs of consecutive utterances from the reference that show lower distance score at least baseline hypotheses. From this process, we obtained three conversational distance scores from 1) the reference transcripts, 2) the hypotheses of our vanilla conversational model which is not using BERT, and 3) the hypotheses of our baseline model. Figure 5 shows the score comparison. Figure 5: Comparison of the conversational distance score on the consecutive utterances of 1) reference, 2) our proposed conversational end-to-end model, and 3) our end-to-end baseline model. We found that our proposed model was 7.4% relatively closer to the reference than the baseline. This indicates that our conversational-context embedding leads to improved similarity across adjacent utterances, resulting in better processing a long conversation. 
7 Conclusion We have introduced a novel method for conversational-context aware end-to-end speech recognition based on a gated network that incorporates word/sentence/speech embeddings. Unlike prior work, our model is trained on conversational datasets to predict a word, conditioning on multiple preceding conversational-context representations, and consequently improves recognition accuracy of a long conversation. Moreover, our gated network can incorporate effectively with text-based external resources, word or sentence embeddings (i.e., fasttext, BERT) within an end-to-end framework and so that the whole system can be optimized towards our final objectives, speech recognition accuracy. By incorporating external embeddings with gating mechanism, our model can achieve further improvement with better conversational-context representation. We evaluated the models on the Switchboard conversational speech corpus and show that our proposed model using gated conversational-context embedding show 15%, 5% relative improvement in WER compared to a baseline model for Switchboard and CallHome subsets respectively. Our model was shown to outperform standard end-to-end speech recognition models trained on isolated sentences. This work is easy to scale and can potentially be applied to any speech related task that can benefit from longer context information, such as spoken dialog system, sentimental analysis. Acknowledgments We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. This work also used the Bridges system, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC). References Uri Alon, Golan Pundak, and Tara N Sainath. 2019. Contextual speech recognition with difficult negative training examples. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and 1139 Signal Processing (ICASSP), pages 6440–6444. IEEE. John Arevalo, Thamar Solorio, Manuel Montes-y G´omez, and Fabio A Gonz´alez. 2017. Gated multimodal units for information fusion. arXiv preprint arXiv:1702.01992. Kartik Audhkhasi, Brian Kingsbury, Bhuvana Ramabhadran, George Saon, and Michael Picheny. 2018. Building competitive direct acoustics-to-word models for english conversational speech recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4759–4763. IEEE. Kartik Audhkhasi, Bhuvana Ramabhadran, George Saon, Michael Picheny, and David Nahamoo. 2017. Direct acoustics-to-word models for english conversational speech recognition. CoRR, abs/1703.07754. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. ICLR. Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. 2016. Endto-end attention-based large vocabulary speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4945–4949. IEEE. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pages 1171–1179. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. 
Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4960–4964. IEEE. Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In Advances in neural information processing systems, pages 577–585. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. NAACL. John Godfrey and Edward Holliman. 1993. Switchboard-1 release 2 ldc97s62. Linguistic Data Consortium, Philadelphia, LDC97S62. John J Godfrey, Edward C Holliman, and Jane McDaniel. 1992. Switchboard: Telephone speech corpus for research and development. In Acoustics, Speech, and Signal Processing, 1992. ICASSP-92., 1992 IEEE International Conference on, volume 1, pages 517–520. IEEE. Alex Graves, Santiago Fern´andez, Faustino Gomez, and J¨urgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning, pages 369–376. ACM. Alex Graves and Navdeep Jaitly. 2014. Towards endto-end speech recognition with recurrent neural networks. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1764–1772. Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. 2014. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567. Takaaki Hori, Shinji Watanabe, Yu Zhang, and William Chan. 2017. Advances in joint ctc-attention based end-to-end speech recognition with a deep cnn encoder and rnn-lm. Interspeech. Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. 2016. Document context language models. ICLR (Workshop track). Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, H´erve J´egou, and Tomas Mikolov. 2016. Fasttext. zip: Compressing text classification models. arXiv preprint arXiv:1612.03651. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431. Association for Computational Linguistics. Suyoun Kim, Siddharth Dalmia, and Florian Metze. 2018. Situation informed end-to-end asr for chime-5 challenge. CHiME5 workshop. Suyoun Kim, Takaaki Hori, and Shinji Watanabe. 2017. Joint ctc-attention based end-to-end speech recognition using multi-task learning. In Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, pages 4835– 4839. IEEE. Suyoun Kim and Florian Metze. 2018. Dialogcontext aware end-to-end speech recognition. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 434–440. IEEE. 1140 Suyoun Kim and Florian Metze. 2019. Acoustic-toword models with conversational context information. NAACL. Suyoun Kim and Michael L Seltzer. 2018. Towards language-universal end-to-end speech recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4914–4918. IEEE. Jamie Kiros, William Chan, and Geoffrey Hinton. 2018. Illustrative language understanding: Largescale visual grounding with image search. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 922–933. Jinyu Li, Guoli Ye, Amit Das, Rui Zhao, and Yifan Gong. 2018. Advancing acoustic-to-word ctc model. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5794–5798. IEEE. Bing Liu and Ian Lane. 2017. Dialog context language modeling with recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, pages 5715– 5719. IEEE. Yajie Miao, Mohammad Gowayyed, and Florian Metze. 2015. EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 167–174. IEEE. Yajie Miao, Mohammad Gowayyed, Xingyu Na, Tom Ko, Florian Metze, and Alexander Waibel. 2016. An empirical exploration of ctc acoustic models. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2623–2627. IEEE. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association. Tomas Mikolov and Geoffrey Zweig. 2012. Context dependent recurrent neural network language model. SLT, 12:234–239. Shruti Palaskar and Florian Metze. 2018. Acoustic-toword recognition with sequence-to-sequence models. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 397–404. IEEE. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pages 1310–1318. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS-W. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahremani, Vimal Manohar, Xingyu Na, Yiming Wang, and Sanjeev Khudanpur. 2016. Purely sequence-trained neural networks for asr based on lattice-free mmi. In Interspeech, pages 2751–2755. Golan Pundak, Tara N Sainath, Rohit Prabhavalkar, Anjuli Kannan, and Ding Zhao. 2018. Deep context: end-to-end contextual speech recognition. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 418–425. IEEE. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Ramon Sanabria and Florian Metze. 2018. Hierarchical multitask learning with ctc. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 485–490. IEEE. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. CoRR, abs/1611.01603. Hagen Soltau, Hank Liao, and Hasim Sak. 2017. Neural speech recognizer: Acoustic-to-word lstm model for large vocabulary speech recognition. Interspeech. Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. 
In Advances in neural information processing systems, pages 3104–3112. Tian Wang and Kyunghyun Cho. 2016. Larger-context language modelling. ACL. Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, Adithya Renduchintala, and Tsubasa Ochiai. 2018. Espnet: End-to-end speech processing toolkit. In Interspeech, pages 2207– 2211. Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R Hershey, and Tomoki Hayashi. 2017. Hybrid ctc/attention architecture for end-to-end speech recognition. IEEE Journal of Selected Topics in Signal Processing, 11(8):1240–1253. 1141 Wayne Xiong, Lingfeng Wu, Jun Zhang, and Andreas Stolcke. 2018. Session-level language modeling for conversational speech. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2764–2768. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. Albert Zeyer, Kazuki Irie, Ralf Schl¨uter, and Hermann Ney. 2018. Improved training of end-to-end attention models for speech recognition. Interspeech. Yu Zhang, William Chan, and Navdeep Jaitly. 2017. Very deep convolutional networks for end-to-end speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, pages 4845–4849. IEEE. Geoffrey Zweig, Chengzhu Yu, Jasha Droppo, and Andreas Stolcke. 2017. Advances in all-neural speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, pages 4805–4809. IEEE.
2019
107
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1142–1147 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1142 Figurative Usage Detection of Symptom Words to Improve Personal Health Mention Detection Adith Iyer♠♣, Aditya Joshi♠, Sarvnaz Karimi♠, Ross Sparks♠, C´ecile Paris♠ ♠CSIRO Data61, Sydney, Australia ♣University of Queensland, Brisbane, Australia [email protected], {firstname.lastname}@csiro.au Abstract Personal health mention detection deals with predicting whether or not a given sentence is a report of a health condition. Past work mentions errors in this prediction when symptom words, i.e., names of symptoms of interest, are used in a figurative sense. Therefore, we combine a state-of-the-art figurative usage detection with CNN-based personal health mention detection. To do so, we present two methods: a pipeline-based approach and a feature augmentation-based approach. The introduction of figurative usage detection results in an average improvement of 2.21% F-score of personal health mention detection, in the case of the feature augmentation-based approach. This paper demonstrates the promise of using figurative usage detection to improve personal health mention detection. 1 Introduction The World Health Organisation places importance on gathering intelligence about epidemics to be able to effectively respond to them (World Health Organisation, 2019). Natural language processing (NLP) techniques have been applied to social media datasets for epidemic intelligence (CharlesSmith et al., 2015). An important classification task in this area is personal health mention detection: to detect whether or not a text contains a personal health mention (PHM). A PHM is a report that either the author or someone they know is experiencing a health condition or a symptom (Lamb et al., 2013). For example, the sentence ‘I have been coughing since morning’ is a PHM, while ‘Having a cough for three weeks or more could be a sign of cancer’ is not. The former reports that the author has a cough while, in the latter, the author provides information about coughs in general. Past work in PHM detection uses classification-based approaches with human-engineered features (Lamb et al., 2013; Yin et al., 2015) or word embeddingbased features (Karisani and Agichtein, 2018). However, consider the quote ‘When Paris sneezes, Europe catches cold’ attributed to Klemens von Metternich1. The quote contains names of symptoms (referred to as ‘symptom words’ hereafter) ‘sneezes’ and ‘cold’. However, it is not a PHM, since the symptom words are used in a figurative sense. Since several epidemic intelligence tools based on social media rely on counts of keyword occurrences (Charles-Smith et al., 2015), figurative sentences may introduce errors. Figurative usage has been quoted as a source of error in past work (Jimeno Yepes et al., 2015; Karisani and Agichtein, 2018). In this paper, we deal with the question: Does personal health mention detection benefit from knowing if symptom words in a text were used in a literal or figurative sense? To address the question, we use a state-ofthe-art approach that detects idiomatic usage of words (Liu and Hwa, 2018). Given a word and a sentence, the approach identifies if the word is used in a figurative or literal sense in the sentence. We refer to this module as ‘figurative usage detection’. 
We experiment with alternative ways to combine figurative usage detection with PHM detection, and report results on a manually labeled dataset of tweets.
(Footnote 1: https://bit.ly/2VoqTif; Accessed on 23rd April, 2019.)
2 Motivation
As a first step, we ascertain whether the volume of figurative usage of symptom words warrants such attention. We randomly selected 200 tweets (with no duplicates or retweets) posted in November 2018, each containing either ‘cough’ or ‘breath’. After discarding tweets with garbled text, two annotators manually annotated each tweet with the labels ‘figurative’ or ‘literal’ to answer the question: ‘Has the symptom word been mentioned in a figurative or literal manner?’. Note that (a) in the tweet ‘When it’s raining cats and dogs and you’re down with a cough!’, the usage of the symptom word is literal (only the idiom ‘raining cats and dogs’ is figurative), and (b) hyperbole (for example, ‘soon I’ll cough my entire lungs up’) is considered to be literal. The two annotators agreed on a label 93.96% of the time. Cohen’s kappa coefficient for inter-rater agreement is 0.8778, indicating high agreement. For 52.75% of these tweets, both annotators assigned the figurative label. This provides only an estimate of the volume of figurative usage of symptom words, and we expect the estimate to differ for different symptom words.
3 Approach
We now introduce the approaches for figurative usage detection and PHM detection. Following that, we present two alternative approaches to interface figurative usage detection with PHM detection: the pipeline approach and the feature augmentation approach.
3.1 Figurative Usage Detection
In the absence of a health-related dataset labeled with figurative usage of symptom words, we implement the unsupervised approach to idiom detection introduced in Liu and Hwa (2018). This forms the figurative usage detection module. Its input is a target keyword and a sentence, and its output is whether or not the keyword is used in a figurative sense. The approach can be summarised in two steps: computation of a literal usage score for the target keyword, followed by an LDA-based estimator that predicts the label. To compute the literal usage score, Liu and Hwa (2018) first generate a set of words that are related to the target keywords (symptom words, in our case). This set is called the ‘literal usage representation’. The literal usage score is computed as the average similarity between the words in the sentence and the words in the literal usage representation. Thus, this score is a real value between 0 and 1 (where 1 is literal and 0 is figurative). The score is then concatenated with linguistic features (described later in this section). The second step is a Latent Dirichlet Allocation (LDA)-based estimator. The estimator computes two distributions: the word-figurative/literal distribution, which indicates the probability of a word being either figurative or literal, and the document-figurative/literal distribution, which gives a predictive score for a document being literal or figurative.
Figure 1: PHM detection. Figure 2: Pipeline approach.
To obtain the literal usage score, we generate the literal usage representation using word2vec similarity learned from the Sentiment140 tweet dataset (Go et al., 2009). We use two sets of linguistic features, as reported in Liu and Hwa (2018): the presence of subordinate clauses and the part-of-speech tags of neighbouring words, using Stanford CoreNLP (Manning et al., 2014).
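As a rough sketch of the literal usage score (not the exact implementation of Liu and Hwa (2018)), the average-similarity computation could look as follows; `embeddings` is a hypothetical dictionary mapping words to vectors (e.g., word2vec trained on Sentiment140), and the literal usage representation shown in the usage comment would in practice be generated automatically rather than hand-picked.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def literal_usage_score(tweet_tokens, literal_representation, embeddings):
    # Average similarity between every in-vocabulary tweet word and every word
    # in the literal usage representation of the symptom word.
    sims = [cosine(embeddings[w], embeddings[r])
            for w in tweet_tokens if w in embeddings
            for r in literal_representation if r in embeddings]
    return sum(sims) / len(sims) if sims else 0.0  # ~1 literal, ~0 figurative

# Hypothetical usage: literal_representation for "cough" might contain words
# such as ["flu", "cold", "throat", "sick"].
```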
We adapt the abstractness feature in their paper to health-relatedness (i.e., the presence of health-related words). The intuition is that tweets which contain more health-related words are more likely to be using the symptom words in a literal sense instead of figurative. Therefore, the abstractness feature in the original paper is converted to domain relatedness and captured using the presence of health-related words. We consider the symptom word as the target word. It must be noted that we do not have or use figurative labels in the dataset except for the sample used to report the efficacy of figurative usage detection. 3.2 PHM Detection We use a CNN-based classifier for PHM detection, as shown in Figure 1. The tweet is converted 1144 to its sentence representation using a concatenation of embeddings of the constituent words, padded to a maximum sequence length. The embeddings are initialised based on pre-trained word embeddings. We experiment with three alternatives of pre-trained word embeddings, as elaborated in Section 4. These are then passed to three sets of convolutional layers with max pooling and dropout layers. A dense layer is finally used to make the prediction. 3.3 Interfacing Figurative Usage Detection with PHM Detection We consider two approaches to interface figurative usage detection with PHM detection: 1. Pipeline Approach places the two modules in a pipeline, as illustrated in Figure 2. If the figurative usage detection module predicts a usage as figurative, the PHM detection classifier is bypassed and the tweet is predicted to not be a PHM. If the figurative usage prediction is literal, then the prediction from the PHM detection module is returned. We refer to this approach as ‘+Pipeline’. 2. Feature Augmentation Approach augments PHM detection with figurative usage features. Therefore, the figurative label and the linguistic features from figurative usage detection are concatenated as figurative usage features ad passed through a convolution layer. The two are then concatenated in a dense layer to make the prediction. The approach is illustrated in Figure 3. This approach is based on Dasgupta et al. (2018), where they augment additional features to word embeddings of words in a document. We refer to this approach as ‘+FeatAug’. In +Pipeline, the figurative label guides whether or not PHM detection will be called. In +FeatAug, the label becomes one of the features. For both the approaches, the figurative label is determined by producing the literal usage score and then applying an empirically determined threshold. We experimentally determine if using the literal usage score performs better than using the LDA-based estimator (See Section 4.3). Figure 3: Feature augmentation approach. 4 Experiment Setup 4.1 Dataset We report our results on a dataset introduced and referred to by Karisani and Agichtein (2018) as the PHM2017 dataset. This dataset consists of 5837 tweets related to a collection of diseases: Alzheimer’s (1103, 16.7% PHM), heart attack (973, 12.4% PHM), Parkinson’s (868, 9.8% PHM), cancer (988, 20.6% PHM), depression (924, 38.5% PHM) and stroke (981, 14.2% PHM). The imbalance in the class labels of the dataset must be noted. Some tweets in the original paper could not be downloaded due to deletion or privacy settings. 4.2 Configuration For PHM detection (PHMD) and the two combined approaches (+Pipeline and +FeatAug), the parameters are empirically determined as: 1. PHMD: Filters=100, Kernels=(3, 4, 5), Pool size=2; Dropout=(0.2, 0.3, 0.5). 2. 
Figurative Usage Detection: The figurative label is predicted using a threshold for the literal usage score. This threshold is set to 0.2. This holds for both +Pipeline and +FeatAug. In the case of +Pipeline, a tweet is predicted as figurative, and, as a result, non-PHM, if the literal usage score is lower than 0.2. In the case of +FeatAug, the figurative label based on the score is added along with other features. 3. +FeatAug: Filters=100; Kernel size (left)=(3, 4, 5), Pool size=2; Dropout=(0.3, 0.1, 0.3); Kernel size (right)=2.
All experiments use the Adam optimiser and a batch size of 128, and are trained for 35 epochs. CNN experiments use the ReLU activation. We use seven types of initialisations for the word embeddings. The first four are a random initialisation and three pre-trained embeddings. The pre-trained embeddings are: (a) word2vec (Mikolov et al., 2013); (b) GloVe (trained on Common Crawl) (Pennington et al., 2014); and (c) Numberbatch (Speer et al., 2017). The next three are embeddings retrofitted with three ontologies. We use three ontologies to retrofit GloVe embeddings using the method by Faruqui et al. (2015). The ontologies are: (a) MeSH,2 (b) Symptom,3 and (c) WordNet (Miller, 1995). The results are averaged across 10-fold cross-validation.

               Random                word2vec              GloVe                 Numberbatch
Approach       P      R      F       P      R      F       P      R      F       P      R      F
PHMD           59.40  39.84  46.31   57.85  47.24  50.99   68.71  50.70  57.05   59.07  43.59  48.63
+Pipeline      59.99  33.65  41.78   57.84  40.80  46.62   67.93  43.25  51.51   59.09  36.74  43.69
+FeatAug       54.51  45.01  48.08   57.11  51.71  53.13   66.70  53.52  58.25   54.48  48.75  50.45
Table 1: Performance of PHM Detection (PHMD), +Pipeline and +FeatAug for four word embedding initialisations. P: Precision, R: Recall, and F: F-score.

               GloVe+MeSH            GloVe+WordNet         GloVe+Symptom
Approach       P      R      F       P      R      F       P      R      F
PHMD           56.95  41.47  46.62   56.41  42.94  47.55   57.57  42.93  47.72
+Pipeline      56.01  34.98  41.75   55.86  36.63  43.12   57.10  36.34  42.90
+FeatAug       53.71  46.46  49.01   55.88  48.47  51.15   56.04  48.11  50.30
Table 2: Performance of PHM Detection (PHMD), +Pipeline and +FeatAug initialised with GloVe word embeddings retrofitted with three ontologies: MeSH, WordNet and Symptom. P: Precision, R: Recall, and F: F-score.

               P      R      F      ∆F
PHMD           59.48  44.18  49.26
+Pipeline      59.12  37.60  44.48  -4.78
+FeatAug       57.32  48.88  51.48  +2.21
Table 3: Average performance of PHM Detection (PHMD), +Pipeline and +FeatAug across the seven word embedding initialisations; P: Precision, R: Recall, F: F-score; ∆F: Difference in the F-score in comparison with PHMD.

Disease        PHMD   +FeatAug
Alzheimer's    65.33  68.48
Heart attack   46.96  45.98
Parkinson's    48.83  51.49
Cancer         53.69  54.58
Depression     70.48  71.34
Stroke         57.03  57.65
Table 4: Impact of figurative usage detection for PHM Detection (PHMD) on individual diseases.

4.3 Evaluation of Figurative Usage Detection
To validate the performance of figurative usage detection, we use the dataset of tweets described in Section 2. The tweets contain symptom words that have been manually labeled. We obtain an F-score of (a) 76.46% when only the literal usage score is used, and (b) 69.72% when the LDA-based estimator is also used. Therefore, we use the literal usage score along with the figurative usage features for our experiments.
5 Results
The effectiveness of PHMD, +Pipeline and +FeatAug for the four kinds of word embedding initialisations is shown in Table 1. In each of these cases, +FeatAug performs better than PHMD, while +Pipeline results in a degradation.
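For reference, the +Pipeline decision rule configured above amounts to the following minimal sketch (the classifier and scorer are hypothetical stand-ins; only the 0.2 threshold is taken from Section 4.2):

```python
LITERAL_THRESHOLD = 0.2  # threshold on the literal usage score from Section 4.2

def pipeline_predict(tweet, symptom_word, literal_usage_score, phm_classifier):
    # If the symptom word looks figurative, bypass the classifier: not a PHM.
    if literal_usage_score(symptom_word, tweet) < LITERAL_THRESHOLD:
        return 0
    # Otherwise defer to the CNN-based PHM classifier's prediction.
    return phm_classifier(tweet)
```

This makes explicit why the pipeline is sensitive to errors in the figurative usage module: any tweet it wrongly scores as figurative can never be recovered as a PHM downstream.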
We note that, for both +FeatAug and +Pipeline, the recall is impacted in comparison with PHMD. Similar trends are observed for the retrofitted embeddings, as shown in Table 2. The improvement when figurative usage detection is used is 2https://www.nlm.nih.gov/mesh/ meshhome.html; Accessed on 23rd April, 2019. 3https://bioportal.bioontology.org/ ontologies/SYMP; Accessed on 23rd April, 2019. 1146 higher in the case of retrofitted embeddings than in the previous case. The highest improvement (47.55% to 51.15%) is when GloVe embeddings are retrofitted with WordNet. A minor observation is that the F-scores are lower than GloVe without the retrofitting, highlighting that retrofitting may not always result in an improvement. Table 3 shows the average performance across the seven types of word embedding initialisations. The +Pipeline approach results in a degradation of 4.78%. This shows that merely discarding tweets where the symptom word usage was predicted as figurative may not be useful. This could be because the figurative usage detection technique is not free from errors. In contrast though, for +FeatAug, there is an improvement of 2.21%. This shows that our technique of augmenting with the figurative usage-based features is beneficial. The improvement of 2.21% may seem small as compared to the prevalence of figurative tweets as described in Section 2. However, all tweets with figurative usage may not have been mis-classified by PHMD. The improvement shows that a focus on figurative usage detection helps PHMD. Finally, the F-scores for PHMD with +FeatAug with GloVe embeddings for the different illnesses, available as a part of the annotation in the dataset, is compared in Table 4. Our observation that heart attack results in the lowest F-score, is similar to the one reported in the original paper. At the same time, we observe that, except for heart attack, all illnesses witness an improvement in the case of +FeatAug. 6 Error Analysis Typical errors made by our approach are: • Indirect reference: Some tweets convey an infection by implication. For example, ‘don’t worry I got my face mask Charlotte, you will not catch the flu from me!’ does not specifically state that someone has influenza. • Health words: In the case of stroke or heart attack, we obtain false negatives because many tweets do not contain other associated health words. Similarly, in the case of depression, some words like ‘addiction’, ‘mental’, ‘anxiety’ appear which were not a part of the related health words taken into account. • Sarcasm or humour: Some mis-classified tweets appear to be sarcastic or joking. For example, ‘I’m trying to overcome depression and I need reasons to get out the house lol’. Here, the person is being humorous (indicated by ‘lol’) but the usage of the symptom word ‘depression’ is literal. 7 Related Work Several approaches for PHM detection have been reported (Joshi et al., 2019). Lamb et al. (2013) incorporate linguistic features such as word classes, stylometry and part of speech patterns. Yin et al. (2015) use similar stylistic features like hashtags and emojis. Karisani and Agichtein (2018) implement another approach of partitioning and distorting the word embedding space to better detect PHMs, obtaining a best F-score of 69%. While we use their dataset, they use a statistical classifier while we use a deep learning-based classifier. 
For figurative usage detection, supervised (Liu and Hwa, 2017) as well as unsupervised (Sporleder and Li, 2009; Liu and Hwa, 2018; Muzny and Zettlemoyer, 2013; Jurgens and Pilehvar, 2015) methods have been reported. We pick the work by Liu and Hwa (2018) assuming that it is state-of-the-art. 8 Conclusions We employed a state-of-the-art method in figurative usage detection to improve the detection of personal health mentions (PHMs) in tweets. The output of this method was combined with classifiers for detecting PHMs in two ways: (1) a simple pipeline-based approach, where the performance of PHM detection degraded; and, (2) a feature augmentation-based approach where the performance of PHM detection improved. Our observations demonstrate the promise of using figurative usage detection for PHM detection, while highlighting that a simple pipeline-based approach may not work. Other ways of combining the two modules, more sophisticated classifiers for both PHM detection and figurative usage detection, are possible directions of future work. Also, a similar application to improve disaster mention detection could be useful (for figurative sentences such as ‘my heart is on fire’). Acknowledgment Adith Iyer was funded by the CSIRO Data61 Vacation Scholarship. The authors thank the anonymous reviewers for their helpful comments. 1147 References Lauren Charles-Smith, Tera Reynolds, Mark Cameron, Mike Conway, Eric Lau, Jennifer Olsen, Julie Pavlin, Mika Shigematsu, Laura Streichert, Katie Suda, et al. 2015. Using social media for actionable disease surveillance and outbreak management: A systematic literature review. PloS one, 10(10):e0139701. Tirthankar Dasgupta, Abir Naskar, Lipika Dey, and Rupsa Saha. 2018. Augmenting textual qualitative features in deep convolution recurrent neural network for automatic essay scoring. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications, pages 93– 102, Melbourne, Australia. Association for Computational Linguistics. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1606–1615, Denver, Colorado. Association for Computational Linguistics. Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford, 1(12). Antonio Jimeno Yepes, Andrew MacKinlay, and Bo Han. 2015. Investigating public health surveillance using twitter. In Proceedings of BioNLP 15, pages 164–170, Beijing, China. Association for Computational Linguistics. Aditya Joshi, Sarvnaz Karimi, Ross Sparks, Cecile Paris, and C Raina MacIntyre. 2019. Survey of text-based epidemic intelligence: A computational linguistic perspective. arXiv preprint arXiv:1903.05801. David Jurgens and Mohammad Taher Pilehvar. 2015. Reserating the awesometastic: An automatic extension of the WordNet taxonomy for novel terms. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1459–1465, Denver, Colorado. Association for Computational Linguistics. Payam Karisani and Eugene Agichtein. 2018. Did you really just have a heart attack?: Towards robust detection of personal health mentions in social media. 
In Proceedings of the World Wide Web Conference, pages 137–146, Lyon, France. Alex Lamb, Michael J. Paul, and Mark Dredze. 2013. Separating fact from fear: Tracking flu infections on twitter. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 789–795, Atlanta, Georgia. Association for Computational Linguistics. Changsheng Liu and Rebecca Hwa. 2017. Representations of context in recognizing the figurative and literal usages of idioms. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 3230– 3236, San Francisco, CA. Changsheng Liu and Rebecca Hwa. 2018. Heuristically informed unsupervised idiom usage recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1723–1731, Brussels, Belgium. Association for Computational Linguistics. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, Lake Tahoe, NV. George A Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39–41. Grace Muzny and Luke Zettlemoyer. 2013. Automatic idiom identification in Wiktionary. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1417–1421, Seattle, Washington, USA. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. ConceptNet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Conference on Artificial Intelligence, pages 4444–4451, San Francisco, CA. Caroline Sporleder and Linlin Li. 2009. Unsupervised recognition of literal and non-literal use of idiomatic expressions. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 754–762, Athens, Greece. Association for Computational Linguistics. World Health Organisation. 2019. Epidemic intelligence - systematic event detection. https: //www.who.int/csr/alertresponse/ epidemicintelligence/en/. [Online; accessed 24-January-2019]. Zhijun Yin, Daniel Fabbri, S Trent Rosenbloom, and Bradley Malin. 2015. A scalable framework to detect personal health mentions on Twitter. Journal of Medical Internet Research, 17(6):e138.
2019
108
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1148–1153 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1148 Complex Word Identification as a Sequence Labelling Task Sian Gooding Dept of Computer Science and Technology University of Cambridge [email protected] Ekaterina Kochmar ALTA Institute University of Cambridge [email protected] Abstract Complex Word Identification (CWI) is concerned with detection of words in need of simplification and is a crucial first step in a simplification pipeline. It has been shown that reliable CWI systems considerably improve text simplification. However, most CWI systems to date address the task on a word-by-word basis, not taking the context into account. In this paper, we present a novel approach to CWI based on sequence modelling. Our system is capable of performing CWI in context, does not require extensive feature engineering and outperforms state-of-the-art systems on this task. 1 Introduction Lexical complexity is one of the main aspects contributing to overall text complexity (Dubay, 2004). It is typically addressed with lexical simplification (LS) systems that aim to paraphrase and substitute complex terms for simpler alternatives. Previous research has shown that Complex Word Identification (CWI) considerably improves lexical simplification (Shardlow, 2014; Paetzold and Specia, 2016a). This is achieved by identifying complex terms in text prior to word substitution. The performance of a CWI component is crucial, as low recall of this component might result in an overly difficult text with many missed complex words, while low precision might result in meaning distortions with an LS system trying to unnecessarily simplify non-complex words (Shardlow, 2013). CWI has recently attracted attention as a standalone application, with at least two shared tasks focusing on it. Current approaches to CWI, including state-of-the-art systems, have a number of limitations. First of all, CWI systems typically address this task on a word-by-word basis, using a large number of features to capture the complexity of a word. For instance, the CWI system by Paetzold and Specia (2016c) uses a total of 69 features, while the one by Gooding and Kochmar (2018) uses 27 features. Secondly, systems performing CWI in a static manner are unable to take the context into account, thus failing to predict word complexity for polysemous words as well as words in various metaphorical or novel contexts. For instance, consider the following two contexts of the word molar from the CWI 2018 shared task (Yimam et al., 2018). Molar has been annotated as complex in the first context (resulting in the binary annotation of 1) by 17 out of 20 annotators (thus, the “probabilistic” label of 0.85), and as non-complex (label 0) in the second context: Contexts Bin Prob Elephants have four molars... 1 0.85 ... new molars emerge in the back of the mouth. 0 0.00 The annotators may have found the second context simpler on the whole, as molars is surrounded by familiar words that imply the meaning (e.g., mouth), whereas elephants is a rarer and less semantically similar co-occurrence. Such contextrelated effects are hard to capture with a CWI system that only takes word-level features into account. Thirdly, CWI systems that only look at individual words cannot grasp complexity above the word level, for example, when a whole phrase is considered complex. 
In this paper, we apply a novel approach to the CWI, based on sequence labelling.1 We show that our system is capable of: • taking word context into account; • relying on word embeddings only, thus eliminating the need for extensive feature engineering; • detecting both complex words and phrases; 1Trained models are available at: https://github. com/siangooding/cwi 1149 • not requiring genre-specific training and representing a one-model-fits-all approach. 2 Related Work 2.1 Complex Word Identification Early studies on CWI address this task by either attempting to simplify all words (Thomas and Anderson, 2012; Bott et al., 2012) or setting a frequency-based threshold (Zeng et al., 2005; Elhadad, 2006; Biran et al., 2011). Horn et al. (2014) show that the former approach may miss up to one third of complex words due to its inability to find simpler alternatives, and Shardlow (2013) argues that a simplify-all approach might result in meaning distortions, but the more resource-intensive threshold-based approach does not necessarily perform significantly better either. At the same time, Shardlow (2013) shows that a classification-based approach to CWI is the most promising one. Most of the teams participating in the recent CWI shared tasks also use classification approaches with extensive feature engineering. The first shared task on CWI at SemEval 2016 (Paetzold and Specia, 2016b) used data from several simplification datasets, annotated by non-native speakers. In this data, about 3% of word types and 11% of word tokens, if contexts are taken into account, are annotated as complex (Paetzold and Specia, 2016b). The CWI 2018 shared task (Yimam et al., 2018) used the data from Wikipedia, news sources and unprofessionally written news, derived from the dataset of Yimam et al. (2017). The dataset was annotated by 10 native and 10 non-native speakers, and, depending on the source of the data, contains 40% to 50% words labelled as complex in context. The dataset contains words and phrases with two labels each. The first label represents binary judgement with bin=1 if at least 1 annotator marked the word as complex in context, and bin=0 otherwise. The second label is a “probabilistic” label representing the proportion of the 20 annotators that labelled the item as complex. The importance of context when considering word complexity is exemplified well in this dataset, as 11.34% of items have different binary labels depending on the context they are used in. When considering probabilistic annotations, of the items labelled in different contexts 10.96% have at least a 5-annotator difference in complexity score in differing contexts. The dataset contains 104 instances with a 10-annotator difference between scores based on the context of the word. For instance, suspicion has been annotated 23 times: Word Unique Max Min σ suspicion 16 0.95 0.15 0.25 Of the 23 probabilistic annotations for suspicion 70% are unique. Max and min values show the largest difference in annotations for this word in context, with 19 annotators labelling it complex in one scenario and only 3 in another. Finally, σ represents the standard deviation of the probabilistic annotations for this word. In this paper, we use the data from the CWI 2018 shared task, which contains annotation for both words and word sequences (called phrases in the task), and represents three different genres of text. We focus on the binary setting (complex vs. non-complex) and compare our results to the winning system by Gooding and Kochmar (2018). 
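The relationship between the raw annotations and the two labels described above can be illustrated with a small sketch; the vote counts passed in are hypothetical, and the function is an illustration of the labelling scheme rather than shared-task code.

```python
def shared_task_labels(complex_votes, num_annotators=20):
    """Derive the CWI 2018 labels from the number of annotators (out of 20)
    who marked the item as complex in its context."""
    binary = 1 if complex_votes >= 1 else 0          # complex if at least one vote
    probabilistic = complex_votes / num_annotators   # proportion of complex votes
    return binary, probabilistic

print(shared_task_labels(19))  # (1, 0.95), cf. the maximum score for 'suspicion'
print(shared_task_labels(0))   # (0, 0.0)
```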
2.2 Sequence Labelling Sequence labelling has been applied successfully to a number of NLP tasks that rely on contextual information, such as named entity recognition, part-of-speech tagging and shallow parsing. Within this framework, the model receives as input a sequence of tokens (w1, ..., wT ) and predicts a label for each token as output. Typically, the input tokens are first mapped to a distributed vector space, resulting in a sequence of word embeddings (x1, ..., xT ). The use of word embeddings allows sequence models to learn similar representations for semantically or functionally similar words. Recent advances to sequential model frameworks have resulted in the models’ ability to infer representations for previously unseen words and to share information about morpheme-level regularities (Rei et al., 2016). Sequence labelling models benefit from the use of long short-term memory (LSTM) units (Gers et al., 2000), as these units can capture the long-term contextual dependencies in natural language. A variation of the traditional architecture, bi-directional LSTMs (BiLSTM) (Hochreiter and Schmidhuber, 1997), has proved highly successful at language tasks, as it is able to consider both the left and right contexts of a word, thus increasing the amount of relevant information available to the network. Similarly, the use of secondary learning objectives can increase the number of salient fea1150 tures and access to relevant information. For example, Rei (2017) shows that training a model to jointly predict surrounding words incentivises the discovery of useful features and associations that are unlikely to be discovered otherwise. From the perspective of CWI, it is clear that context greatly impacts the perceived difficulty of text. In this paper we investigate whether CWI can be framed as a sequence labelling task. 3 Implementation For our experiments, we use the English part of the CWI datasets from Yimam et al. (2017), which contains texts on professionally written NEWS, amateurishly written WIKINEWS, and WIKIPEDIA articles. The original data includes the annotation for a selected set of content words, which is provided alongside the full sentence and the word span. The annotation contains both binary (bin) and “probabilistic” (prob) labels as detailed in Section 2: Sentence Word Bin Prob They drastically... drastically 1 0.5 As the sequential model expects the complete word context as an input, we adapt the original format by tokenizing the sentences and including the annotation for each word token, using C for the annotated complex words and phrases, and N for those that are either annotated as non-complex in the original data or not included in it (e.g., function words), which results in the following format: They N drastically C ... We opted to use a sequential architecture by Rei (2017), as it has achieved state-of-the-art results on a number of NLP tasks, including error detection, which is similar to CWI in that it identifies relatively rare sequences of words in context. The design of this architecture is highly suited to the task of CWI as: (1) the use of a BiLSTM provides contextual information from both the left and right context of a target word; (2) the context is combined with both word and characterlevel representations (Rei et al., 2016); (3) this architecture uses a language modelling objective, which enables the model to learn better composition functions and to predict the probability of individual words in context. 
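A rough sketch of this conversion is shown below; it is not the authors' preprocessing script, the tokenisation is naive whitespace splitting purely for illustration, and the real dataset identifies targets by character spans rather than by string matching.

```python
def to_sequence_format(sentence, target, binary_label):
    """Map a span-annotated instance to token-level C/N labels."""
    target_tokens = set(target.split())   # handles single words and phrases
    lines = []
    for token in sentence.split():
        is_target = binary_label == 1 and token.strip(".,") in target_tokens
        lines.append(f"{token} {'C' if is_target else 'N'}")
    return "\n".join(lines)

print(to_sequence_format("They drastically cut spending", "drastically", 1))
# They N
# drastically C
# cut N
# spending N
```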
As previous work on CWI has consistently found word frequency and length to be highly informative features, we choose an architecture which utilises sub-word information and a language modelling objective. We use 300-dimensional GloVe embeddings as word representations (Pennington et al., 2014) and train the model on randomly shuffled texts from all three genres for 20 iterations. We train the model using word annotations and predict binary word scores using the output label probabilities. If the probability of a word belonging to the complex class is above 0.50, it is considered a complex word. For phrase-level binary prediction, we consider the phrases contained within the dataset. The complex class probability for each word, aside from stop words, is predicted and combined into a final average score. If this average is above a predefined threshold of 0.50 then the phrase is considered complex. 4 Results & Discussion Results: We report the results obtained with the sequence labelling (SEQ) model for the binary task and compare them to the current state-of-the-art in complex word identification, CAMB system by Gooding and Kochmar (2018), which achieved the best results across all binary and two probabilistic tracks in the CWI 2018 shared task (Yimam et al., 2018). The evaluation metric reported is the macro-averaged F1, as was used in the 2018 CWI shared task (Yimam et al., 2018). For the binary task, both words and phrases are considered correct if the system outputs the correct binary label. The CAMB system considers words irrespective of their context and relies on 27 features of various types, encoding lexical, syntactic, frequencybased and other types of information about individual words. The system uses Random Forests and AdaBoost for classification, but as Gooding and Kochmar (2018) report, the choice of the features, algorithm and training data depends on the genre. In addition, phrase classification is performed using a ‘greedy’ approach and simply labelling all phrases as complex. The results presented in Table 1 show that the SEQ system outperforms the CAMB system on all three genres on the task of binary complex word identification. The largest performance increase for words is on the WIKIPEDIA test set (+3.60%). Table 1 also shows that on the combined set of words and phrases (words+phrases) the two 1151 Test Set Macro F-Score CAMB SEQ Words Only NEWS 0.8633 0.8763 (+1.30) WIKINEWS 0.8317 0.8540 (+2.23) WIKIPEDIA 0.7780 0.8140 (+3.60) Words+Phrases NEWS 0.8736 0.8763 (+0.27) WIKINEWS 0.8400 0.8505 (+1.05) WIKIPEDIA 0.8115 0.8158 (+0.43) Table 1: SEQ vs. CAMB system results on words only and on words and phrases systems achieve similar results: the SEQ model beats the CAMB model only marginally, with the largest difference of +1.05% on the WIKINEWS data. However, it is worth highlighting that the CAMB system does not perform any phrase classification per se and simply marks all phrases as complex. Using the dataset statistics, we estimate that CAMB system achieves precision of 0.64. The SEQ model outperforms the CAMB system, achieving precision of 0.71. We note that the SEQ model is not only able to outperform the CAMB system on all datasets for both words only and words+phrases, but it also has a clear practical advantage: the only input information it uses at run time are word embeddings, whereas the CAMB system requires 27 features based on a variety of sources. 
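As a concrete illustration of the word- and phrase-level decision rules described above, the following sketch applies the 0.50 thresholds reported in the paper; the stop-word list and the probability values are made up for illustration, with `word_probs` standing in for the model's per-token P(complex) in context.

```python
STOPWORDS = {"the", "a", "an", "of", "in", "to", "and"}  # placeholder stop list

def word_is_complex(prob, threshold=0.50):
    return prob > threshold

def phrase_is_complex(word_probs, threshold=0.50):
    # Average the complex-class probabilities of non-stopword tokens only.
    scores = [p for w, p in word_probs.items() if w.lower() not in STOPWORDS]
    return (sum(scores) / len(scores)) > threshold if scores else False

print(word_is_complex(0.57))                                        # True
print(phrase_is_complex({"waves": 0.57, "of": 0.10, "reform": 0.61}))  # True
```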
In addition, the CAMB system needs to rely on individually tailored systems to maximize the results across datasets, whereas the SEQ model is a ‘one size fits all’ model that is able to work out-of-the-box across all datasets, achieving state-of-the-art performance by harnessing the power of word context, embeddings and character-level morphology. We additionally compare our results to the recent work by Maddela and Xu (2018), who show an improvement on the CWI systems with the use of additional ‘human-based’ features. Using an English lexicon of 15, 000 words with wordcomplexity ratings by human annotators, they are able to improve the scores of the winning CWI system from the 2016 shared task by Paetzold and Specia (2016c), and the nearest centroid (NC) approach by Yimam et al. (2017). They report the best F-score of 74.8 on the combined CWI 2018 shared task testset, achieved using the NC approach augmented with the complexity lexicon. We note that both CAMB and our SEQ model achieve significantly higher results. Discussion: To further analyze the results achieved by CAMB and SEQ on the test sets, we apply the McNemar statistical test (McNemar, 1947), which is comparable to the widely used paired t-test, and is most suitable for dichotomous dependent variables. Table 2 presents the contingency table for words only, and Table 3 for words+phrases: CAMB Correct CAMB Wrong SEQ Correct a=3002 b=205 SEQ Wrong c=145 d=349 Table 2: Contingency table for words only CAMB Correct CAMB Wrong SEQ Correct a=3443 b=207 SEQ Wrong c=145 d=457 Table 3: Contingency table for words+phrases Using the above values, the continuity corrected McNemar test (Edwards, 1948) estimates χ2 as: χ2 = (|b −c| −1)2 (b + c) (1) According to the test, the SEQ system achieves significantly better results than the CAMB system on words only (p = 0.0016, χ2 = 9.95) as well as on words+phrases (p = 0.0011, χ2 = 10.57). 349 word tokens, with 289 word types, are incorrectly labelled by both systems (see Table 2). Of these, 166 words are incorrectly identified as complex, and 183 are incorrectly identified as simple. Of the words that are not identified as complex by the SEQ model, 74% are marked as complex by only one annotator out of twenty, and 93% by one or two annotators. This highlights the idiosyncratic nature of the task and why it may be particularly challenging to address the complexity needs of all individuals with a single system. There are 205 word instances that are correctly classified by the SEQ model, but not by the CAMB system. 34% of these words the CAMB system correctly classifies in other contexts, but not when the context changes, for instance when the same words are used in unusual or metaphorical contexts. Table 4 presents some examples of the contexts where the SEQ model correctly identifies the complexity of the word, but CAMB model fails (LABEL stands for the gold standard label). 1152 Contexts CAMB SEQ LABEL Successive waves of bank sector reforms have failed 0 1 1 Diffraction occurs with all waves 0 0 0 Table 4: Context dependent annotations of the word waves We note that the SEQ model is able to correctly identify the complexity of the word waves when used in different contexts. The system outputs a score of 0.5692 for the first context (Successive waves of bank sector [...]) and 0.4704 for the second (Diffraction occurs with all waves), reflecting that the complexity level is dependent on the context. 5 Conclusions In this paper, we address the limitations of the existing CWI systems. 
Our SEQ model relies on sequence labelling and outperforms state-of-the-art systems with a one-model-fits-all approach. It is able to take context into account and classify both words and phrases in a unified framework, without the need for expensive feature engineering. Our future research will focus on the relative nature of complexity judgements and will use the SEQ model to predict complexity on a scale. We will also investigate whether the SEQ model may benefit from sources of information other than word embeddings and character-level morphology. Finally, we plan to investigate alternative methods to modelling phrase and multi-word expression complexity. Acknowledgments We thank Cambridge English for supporting this research via the ALTA Institute. We are also grateful to the anonymous reviewers for their valuable feedback. References Or Biran, Samuel Brody, and Noemie Elhadad. 2011. Putting it Simply: a Context-Aware Approach to Lexical Simplification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: short papers, pages 496–501. Stefan Bott, Luz Rello, Biljana Drndarevic, and Horacio Saggion. 2012. Can Spanish Be Simpler? LexSiS: Lexical Simplification for Spanish. In Proceedings of COLING 2012: Technical Papers, pages 357–374. William H. Dubay. 2004. The Principles of Readability. Costa Mesa, CA: Impact Information. Allen L. Edwards. 1948. Note on the correction for continuity in testing the significance of the difference between correlated proportions. Psychometrika, 13(3):185–187. Noemie Elhadad. 2006. Comprehending Technical Texts: Predicting and Defining Unfamiliar Terms. In AMIA Annual Symposium Proceedings, pages 239–243. Felix A. Gers, J¨urgen Schmidhuber, and Fred Cummins. 2000. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451–2471. Sian Gooding and Ekaterina Kochmar. 2018. CAMB at CWI Shared Task 2018: Complex Word Identification with Ensemble-Based Voting. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 184–194. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Colby Horn, Cathryn Manduca, and David Kauchak. 2014. Learning a lexical simplifier using Wikipedia. In Proceedings of the 52nd ACL, pages 458–463. Mounica Maddela and Wei Xu. 2018. A WordComplexity Lexicon and A Neural Readability Ranking Model for Lexical Simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3749–3760. Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153–157. Gustavo Paetzold and Lucia Specia. 2016a. PLUMBErr: An Automatic Error Identification Framework for Lexical Simplification. In Proceedings of the first international workshop on Quality Assessment for Text Simplification (QATS), pages 1–9. European Language Resources Association (ELRA). Gustavo Paetzold and Lucia Specia. 2016b. SemEval 2016 Task 11: Complex Word Identification. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 560–569. Gustavo Paetzold and Lucia Specia. 2016c. SV000gg at SemEval-2016 Task 11: Heavy Gauge Complex Word Identification with System Voting. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 969–974. 1153 Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. 
GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Marek Rei. 2017. Semi-supervised multitask learning for sequence labeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2121–2130. Marek Rei, Gamal K.O. Crichton, and Sampo Pyysalo. 2016. Attending to characters in neural sequence labeling models. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 309–318. Matthew Shardlow. 2013. A Comparison of Techniques to Automatically Identify Complex Words. In Proceedings of the Student Research Workshop at the 51st Annual Meeting of the Association for Computational Linguistics (ACL), pages 103–109. Matthew Shardlow. 2014. Out in the open: Finding and categorising errors in the lexical simplification pipeline. In In Proceedings of the 9th LREC, pages 1583–1590. S. Rebecca Thomas and Sven Anderson. 2012. WordNet-Based Lexical Simplification of a Document. In Proceedings of KONVENS 2012 (Main track: oral presentations). Seid Muhie Yimam, Chris Biemann, Shervin Malmasi, Gustavo Paetzold, Lucia Specia, Sanja ˇStajner, Ana¨ıs Tack, and Marcos Zampieri. 2018. A Report on the Complex Word Identification Shared Task 2018. In Proceedings of the 13th Workshop on Innovative Use of NLP for Building Educational Applications, pages 66–78. Seid Muhie Yimam, Sanja ˇStajner, Martin Riedl, and Chris Biemann. 2017. CWIG3G2 - Complex Word Identification Task across Three Text Genres and Two User Groups. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 401– 407. Qing Zeng, Eunjung Kim, Jon Crowell, and Tony Tse. 2005. Biological and Medical Data Analysis. ISBMDA 2005. Lecture Notes in Computer Science, volume 3745 of ISBMDA 2005, chapter A Text Corpora-Based Estimation of the Familiarity of Health Terminology. Springer, Berlin, Heidelberg.
2019
109